Test Report: KVM_Linux_crio 19644

c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-15:36219

Failed tests (14/222)

TestAddons/parallel/Registry (74.25s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.719878ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-hbp2b" [29e66421-b96f-416d-b126-9c3b0d11bc7f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003834999s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ncp27" [cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003896051s
addons_test.go:342: (dbg) Run:  kubectl --context addons-368929 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-368929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-368929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.08956528s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-368929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
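The probe that failed above can be replayed by hand against the same profile. The command below is copied unchanged from the log (the addons-368929 context name and the busybox test image are specific to this run); per the assertion above, a healthy registry service should answer the wget probe with HTTP/1.1 200 rather than timing out:

	kubectl --context addons-368929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"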
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 ip
2024/09/15 06:42:10 [DEBUG] GET http://192.168.39.212:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable registry --alsologtostderr -v=1
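After the in-cluster probe fails, the test still checks the registry from the host through the node IP (the ip call and the DEBUG GET above). A rough manual equivalent, assuming the NodePort 5000 shown in the DEBUG line and substituting curl for the test's HTTP client:

	NODE_IP=$(out/minikube-linux-amd64 -p addons-368929 ip)
	curl -v "http://${NODE_IP}:5000/"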
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-368929 -n addons-368929
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 logs -n 25: (1.535577305s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-832723                                                                     | download-only-832723 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| start   | -o=json --download-only                                                                     | download-only-119130 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | -p download-only-119130                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-119130                                                                     | download-only-119130 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-832723                                                                     | download-only-832723 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-119130                                                                     | download-only-119130 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-702457 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | binary-mirror-702457                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37011                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-702457                                                                     | binary-mirror-702457 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-368929 --wait=true                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-368929 ssh cat                                                                       | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | /opt/local-path-provisioner/pvc-37b863f6-d527-401f-89ba-956f4262c0c9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-368929                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-368929                                                                            |                      |         |         |                     |                     |
	| addons  | addons-368929 addons                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-368929 ssh curl -s                                                                   | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-368929 ip                                                                            | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
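For reference, the addons-368929 start recorded across several rows of the Args column above corresponds to a single invocation along the following lines (reconstructed from the table; the exact flag order of the original run may have differed):

	out/minikube-linux-amd64 start -p addons-368929 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller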
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:30:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:30:34.502587   13942 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:30:34.502678   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:34.502685   13942 out.go:358] Setting ErrFile to fd 2...
	I0915 06:30:34.502689   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:34.502874   13942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 06:30:34.503472   13942 out.go:352] Setting JSON to false
	I0915 06:30:34.504273   13942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":780,"bootTime":1726381054,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:30:34.504369   13942 start.go:139] virtualization: kvm guest
	I0915 06:30:34.507106   13942 out.go:177] * [addons-368929] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:30:34.508386   13942 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:30:34.508405   13942 notify.go:220] Checking for updates...
	I0915 06:30:34.511198   13942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:30:34.512524   13942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:30:34.513658   13942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:34.514857   13942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:30:34.515998   13942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:30:34.517110   13942 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:30:34.547737   13942 out.go:177] * Using the kvm2 driver based on user configuration
	I0915 06:30:34.548792   13942 start.go:297] selected driver: kvm2
	I0915 06:30:34.548818   13942 start.go:901] validating driver "kvm2" against <nil>
	I0915 06:30:34.548833   13942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:30:34.549511   13942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:34.549598   13942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 06:30:34.563630   13942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 06:30:34.563667   13942 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:30:34.563907   13942 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:30:34.563939   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:30:34.563977   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:30:34.563985   13942 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:30:34.564028   13942 start.go:340] cluster config:
	{Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:34.564113   13942 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:34.565784   13942 out.go:177] * Starting "addons-368929" primary control-plane node in "addons-368929" cluster
	I0915 06:30:34.566926   13942 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:34.566954   13942 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:30:34.566963   13942 cache.go:56] Caching tarball of preloaded images
	I0915 06:30:34.567049   13942 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 06:30:34.567062   13942 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:30:34.567364   13942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json ...
	I0915 06:30:34.567385   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json: {Name:mk52f636c4ede8c4dfee1d713e4fd97fe830cfd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:34.567522   13942 start.go:360] acquireMachinesLock for addons-368929: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 06:30:34.567577   13942 start.go:364] duration metric: took 39.328µs to acquireMachinesLock for "addons-368929"
	I0915 06:30:34.567599   13942 start.go:93] Provisioning new machine with config: &{Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:30:34.567665   13942 start.go:125] createHost starting for "" (driver="kvm2")
	I0915 06:30:34.569232   13942 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0915 06:30:34.569343   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:30:34.569382   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:30:34.583188   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0915 06:30:34.583668   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:30:34.584246   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:30:34.584267   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:30:34.584599   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:30:34.584752   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:34.584884   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:34.585061   13942 start.go:159] libmachine.API.Create for "addons-368929" (driver="kvm2")
	I0915 06:30:34.585092   13942 client.go:168] LocalClient.Create starting
	I0915 06:30:34.585134   13942 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 06:30:34.864190   13942 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 06:30:35.049893   13942 main.go:141] libmachine: Running pre-create checks...
	I0915 06:30:35.049914   13942 main.go:141] libmachine: (addons-368929) Calling .PreCreateCheck
	I0915 06:30:35.050423   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:35.050849   13942 main.go:141] libmachine: Creating machine...
	I0915 06:30:35.050864   13942 main.go:141] libmachine: (addons-368929) Calling .Create
	I0915 06:30:35.051026   13942 main.go:141] libmachine: (addons-368929) Creating KVM machine...
	I0915 06:30:35.052240   13942 main.go:141] libmachine: (addons-368929) DBG | found existing default KVM network
	I0915 06:30:35.052972   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.052837   13964 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0915 06:30:35.053018   13942 main.go:141] libmachine: (addons-368929) DBG | created network xml: 
	I0915 06:30:35.053051   13942 main.go:141] libmachine: (addons-368929) DBG | <network>
	I0915 06:30:35.053059   13942 main.go:141] libmachine: (addons-368929) DBG |   <name>mk-addons-368929</name>
	I0915 06:30:35.053064   13942 main.go:141] libmachine: (addons-368929) DBG |   <dns enable='no'/>
	I0915 06:30:35.053070   13942 main.go:141] libmachine: (addons-368929) DBG |   
	I0915 06:30:35.053076   13942 main.go:141] libmachine: (addons-368929) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0915 06:30:35.053085   13942 main.go:141] libmachine: (addons-368929) DBG |     <dhcp>
	I0915 06:30:35.053090   13942 main.go:141] libmachine: (addons-368929) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0915 06:30:35.053095   13942 main.go:141] libmachine: (addons-368929) DBG |     </dhcp>
	I0915 06:30:35.053099   13942 main.go:141] libmachine: (addons-368929) DBG |   </ip>
	I0915 06:30:35.053104   13942 main.go:141] libmachine: (addons-368929) DBG |   
	I0915 06:30:35.053114   13942 main.go:141] libmachine: (addons-368929) DBG | </network>
	I0915 06:30:35.053144   13942 main.go:141] libmachine: (addons-368929) DBG | 
	I0915 06:30:35.058552   13942 main.go:141] libmachine: (addons-368929) DBG | trying to create private KVM network mk-addons-368929 192.168.39.0/24...
	I0915 06:30:35.121581   13942 main.go:141] libmachine: (addons-368929) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 ...
	I0915 06:30:35.121603   13942 main.go:141] libmachine: (addons-368929) DBG | private KVM network mk-addons-368929 192.168.39.0/24 created
	I0915 06:30:35.121625   13942 main.go:141] libmachine: (addons-368929) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 06:30:35.121656   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.121548   13964 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:35.121742   13942 main.go:141] libmachine: (addons-368929) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 06:30:35.379116   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.378937   13964 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa...
	I0915 06:30:35.512593   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.512453   13964 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/addons-368929.rawdisk...
	I0915 06:30:35.512623   13942 main.go:141] libmachine: (addons-368929) DBG | Writing magic tar header
	I0915 06:30:35.512637   13942 main.go:141] libmachine: (addons-368929) DBG | Writing SSH key tar header
	I0915 06:30:35.512649   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.512598   13964 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 ...
	I0915 06:30:35.512682   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929
	I0915 06:30:35.512720   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 06:30:35.512748   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:35.512761   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 (perms=drwx------)
	I0915 06:30:35.512770   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 06:30:35.512782   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 06:30:35.512789   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins
	I0915 06:30:35.512796   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home
	I0915 06:30:35.512802   13942 main.go:141] libmachine: (addons-368929) DBG | Skipping /home - not owner
	I0915 06:30:35.512811   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 06:30:35.512824   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 06:30:35.512862   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 06:30:35.512879   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 06:30:35.512887   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 06:30:35.512892   13942 main.go:141] libmachine: (addons-368929) Creating domain...
	I0915 06:30:35.513950   13942 main.go:141] libmachine: (addons-368929) define libvirt domain using xml: 
	I0915 06:30:35.513976   13942 main.go:141] libmachine: (addons-368929) <domain type='kvm'>
	I0915 06:30:35.513987   13942 main.go:141] libmachine: (addons-368929)   <name>addons-368929</name>
	I0915 06:30:35.513996   13942 main.go:141] libmachine: (addons-368929)   <memory unit='MiB'>4000</memory>
	I0915 06:30:35.514006   13942 main.go:141] libmachine: (addons-368929)   <vcpu>2</vcpu>
	I0915 06:30:35.514012   13942 main.go:141] libmachine: (addons-368929)   <features>
	I0915 06:30:35.514017   13942 main.go:141] libmachine: (addons-368929)     <acpi/>
	I0915 06:30:35.514020   13942 main.go:141] libmachine: (addons-368929)     <apic/>
	I0915 06:30:35.514025   13942 main.go:141] libmachine: (addons-368929)     <pae/>
	I0915 06:30:35.514029   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514034   13942 main.go:141] libmachine: (addons-368929)   </features>
	I0915 06:30:35.514040   13942 main.go:141] libmachine: (addons-368929)   <cpu mode='host-passthrough'>
	I0915 06:30:35.514045   13942 main.go:141] libmachine: (addons-368929)   
	I0915 06:30:35.514052   13942 main.go:141] libmachine: (addons-368929)   </cpu>
	I0915 06:30:35.514057   13942 main.go:141] libmachine: (addons-368929)   <os>
	I0915 06:30:35.514063   13942 main.go:141] libmachine: (addons-368929)     <type>hvm</type>
	I0915 06:30:35.514068   13942 main.go:141] libmachine: (addons-368929)     <boot dev='cdrom'/>
	I0915 06:30:35.514074   13942 main.go:141] libmachine: (addons-368929)     <boot dev='hd'/>
	I0915 06:30:35.514079   13942 main.go:141] libmachine: (addons-368929)     <bootmenu enable='no'/>
	I0915 06:30:35.514087   13942 main.go:141] libmachine: (addons-368929)   </os>
	I0915 06:30:35.514123   13942 main.go:141] libmachine: (addons-368929)   <devices>
	I0915 06:30:35.514143   13942 main.go:141] libmachine: (addons-368929)     <disk type='file' device='cdrom'>
	I0915 06:30:35.514158   13942 main.go:141] libmachine: (addons-368929)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/boot2docker.iso'/>
	I0915 06:30:35.514178   13942 main.go:141] libmachine: (addons-368929)       <target dev='hdc' bus='scsi'/>
	I0915 06:30:35.514196   13942 main.go:141] libmachine: (addons-368929)       <readonly/>
	I0915 06:30:35.514210   13942 main.go:141] libmachine: (addons-368929)     </disk>
	I0915 06:30:35.514224   13942 main.go:141] libmachine: (addons-368929)     <disk type='file' device='disk'>
	I0915 06:30:35.514233   13942 main.go:141] libmachine: (addons-368929)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 06:30:35.514247   13942 main.go:141] libmachine: (addons-368929)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/addons-368929.rawdisk'/>
	I0915 06:30:35.514254   13942 main.go:141] libmachine: (addons-368929)       <target dev='hda' bus='virtio'/>
	I0915 06:30:35.514259   13942 main.go:141] libmachine: (addons-368929)     </disk>
	I0915 06:30:35.514272   13942 main.go:141] libmachine: (addons-368929)     <interface type='network'>
	I0915 06:30:35.514279   13942 main.go:141] libmachine: (addons-368929)       <source network='mk-addons-368929'/>
	I0915 06:30:35.514284   13942 main.go:141] libmachine: (addons-368929)       <model type='virtio'/>
	I0915 06:30:35.514291   13942 main.go:141] libmachine: (addons-368929)     </interface>
	I0915 06:30:35.514298   13942 main.go:141] libmachine: (addons-368929)     <interface type='network'>
	I0915 06:30:35.514327   13942 main.go:141] libmachine: (addons-368929)       <source network='default'/>
	I0915 06:30:35.514346   13942 main.go:141] libmachine: (addons-368929)       <model type='virtio'/>
	I0915 06:30:35.514353   13942 main.go:141] libmachine: (addons-368929)     </interface>
	I0915 06:30:35.514363   13942 main.go:141] libmachine: (addons-368929)     <serial type='pty'>
	I0915 06:30:35.514370   13942 main.go:141] libmachine: (addons-368929)       <target port='0'/>
	I0915 06:30:35.514375   13942 main.go:141] libmachine: (addons-368929)     </serial>
	I0915 06:30:35.514382   13942 main.go:141] libmachine: (addons-368929)     <console type='pty'>
	I0915 06:30:35.514401   13942 main.go:141] libmachine: (addons-368929)       <target type='serial' port='0'/>
	I0915 06:30:35.514411   13942 main.go:141] libmachine: (addons-368929)     </console>
	I0915 06:30:35.514423   13942 main.go:141] libmachine: (addons-368929)     <rng model='virtio'>
	I0915 06:30:35.514431   13942 main.go:141] libmachine: (addons-368929)       <backend model='random'>/dev/random</backend>
	I0915 06:30:35.514440   13942 main.go:141] libmachine: (addons-368929)     </rng>
	I0915 06:30:35.514452   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514462   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514471   13942 main.go:141] libmachine: (addons-368929)   </devices>
	I0915 06:30:35.514478   13942 main.go:141] libmachine: (addons-368929) </domain>
	I0915 06:30:35.514493   13942 main.go:141] libmachine: (addons-368929) 
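The XML above is the libvirt domain definition libmachine creates for the node VM. If the VM needs inspecting after the run, standard libvirt tooling on the Jenkins host would show the resulting domain and the private network (these commands are not part of this log and assume virsh is installed):

	virsh dumpxml addons-368929
	virsh net-dumpxml mk-addons-368929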
	I0915 06:30:35.519732   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:97:d7:7e in network default
	I0915 06:30:35.520190   13942 main.go:141] libmachine: (addons-368929) Ensuring networks are active...
	I0915 06:30:35.520223   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:35.520835   13942 main.go:141] libmachine: (addons-368929) Ensuring network default is active
	I0915 06:30:35.521094   13942 main.go:141] libmachine: (addons-368929) Ensuring network mk-addons-368929 is active
	I0915 06:30:35.521540   13942 main.go:141] libmachine: (addons-368929) Getting domain xml...
	I0915 06:30:35.522139   13942 main.go:141] libmachine: (addons-368929) Creating domain...
	I0915 06:30:36.911230   13942 main.go:141] libmachine: (addons-368929) Waiting to get IP...
	I0915 06:30:36.912033   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:36.912348   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:36.912367   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:36.912342   13964 retry.go:31] will retry after 305.621927ms: waiting for machine to come up
	I0915 06:30:37.219791   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.220118   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.220142   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.220077   13964 retry.go:31] will retry after 369.163907ms: waiting for machine to come up
	I0915 06:30:37.590495   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.590957   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.590982   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.590911   13964 retry.go:31] will retry after 359.18262ms: waiting for machine to come up
	I0915 06:30:37.951271   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.951735   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.951766   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.951687   13964 retry.go:31] will retry after 431.887952ms: waiting for machine to come up
	I0915 06:30:38.385216   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:38.385618   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:38.385654   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:38.385573   13964 retry.go:31] will retry after 586.296252ms: waiting for machine to come up
	I0915 06:30:38.973375   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:38.973835   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:38.973871   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:38.973742   13964 retry.go:31] will retry after 586.258738ms: waiting for machine to come up
	I0915 06:30:39.561452   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:39.561928   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:39.561949   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:39.561894   13964 retry.go:31] will retry after 904.897765ms: waiting for machine to come up
	I0915 06:30:40.468462   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:40.468857   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:40.468885   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:40.468834   13964 retry.go:31] will retry after 1.465267821s: waiting for machine to come up
	I0915 06:30:41.936456   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:41.936817   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:41.936840   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:41.936771   13964 retry.go:31] will retry after 1.712738986s: waiting for machine to come up
	I0915 06:30:43.651694   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:43.652084   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:43.652108   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:43.652035   13964 retry.go:31] will retry after 2.008845539s: waiting for machine to come up
	I0915 06:30:45.663024   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:45.663547   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:45.663573   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:45.663481   13964 retry.go:31] will retry after 2.586699686s: waiting for machine to come up
	I0915 06:30:48.251434   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:48.251775   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:48.251796   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:48.251742   13964 retry.go:31] will retry after 2.759887359s: waiting for machine to come up
	I0915 06:30:51.013703   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:51.014097   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:51.014135   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:51.014061   13964 retry.go:31] will retry after 4.488920728s: waiting for machine to come up
	I0915 06:30:55.504672   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.505169   13942 main.go:141] libmachine: (addons-368929) Found IP for machine: 192.168.39.212
	I0915 06:30:55.505195   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has current primary IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.505204   13942 main.go:141] libmachine: (addons-368929) Reserving static IP address...
	I0915 06:30:55.505525   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find host DHCP lease matching {name: "addons-368929", mac: "52:54:00:b0:ac:60", ip: "192.168.39.212"} in network mk-addons-368929
	I0915 06:30:55.572968   13942 main.go:141] libmachine: (addons-368929) DBG | Getting to WaitForSSH function...
	I0915 06:30:55.573003   13942 main.go:141] libmachine: (addons-368929) Reserved static IP address: 192.168.39.212
	I0915 06:30:55.573015   13942 main.go:141] libmachine: (addons-368929) Waiting for SSH to be available...
	I0915 06:30:55.575550   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.575899   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.575919   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.576162   13942 main.go:141] libmachine: (addons-368929) DBG | Using SSH client type: external
	I0915 06:30:55.576193   13942 main.go:141] libmachine: (addons-368929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa (-rw-------)
	I0915 06:30:55.576224   13942 main.go:141] libmachine: (addons-368929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 06:30:55.576241   13942 main.go:141] libmachine: (addons-368929) DBG | About to run SSH command:
	I0915 06:30:55.576256   13942 main.go:141] libmachine: (addons-368929) DBG | exit 0
	I0915 06:30:55.705901   13942 main.go:141] libmachine: (addons-368929) DBG | SSH cmd err, output: <nil>: 
	I0915 06:30:55.706188   13942 main.go:141] libmachine: (addons-368929) KVM machine creation complete!
	I0915 06:30:55.706473   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:55.707031   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:55.707200   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:55.707361   13942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 06:30:55.707372   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:30:55.708643   13942 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 06:30:55.708660   13942 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 06:30:55.708667   13942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 06:30:55.708675   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.710847   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.711159   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.711187   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.711316   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.711564   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.711697   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.711844   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.712017   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.712184   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.712193   13942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 06:30:55.812983   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:30:55.813004   13942 main.go:141] libmachine: Detecting the provisioner...
	I0915 06:30:55.813010   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.815500   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.815897   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.815925   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.816042   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.816221   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.816381   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.816518   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.816670   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.816829   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.816839   13942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 06:30:55.918360   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 06:30:55.918439   13942 main.go:141] libmachine: found compatible host: buildroot
	I0915 06:30:55.918448   13942 main.go:141] libmachine: Provisioning with buildroot...
	I0915 06:30:55.918454   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:55.918690   13942 buildroot.go:166] provisioning hostname "addons-368929"
	I0915 06:30:55.918711   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:55.918840   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.920966   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.921446   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.921474   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.921659   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.921826   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.921967   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.922063   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.922230   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.922377   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.922388   13942 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-368929 && echo "addons-368929" | sudo tee /etc/hostname
	I0915 06:30:56.039825   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-368929
	
	I0915 06:30:56.039850   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.042251   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.042524   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.042543   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.042750   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.042921   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.043023   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.043132   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.043236   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.043381   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.043395   13942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-368929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-368929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-368929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:30:56.154978   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:30:56.155020   13942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 06:30:56.155050   13942 buildroot.go:174] setting up certificates
	I0915 06:30:56.155069   13942 provision.go:84] configureAuth start
	I0915 06:30:56.155094   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:56.155378   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.157861   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.158130   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.158164   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.158372   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.160429   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.160700   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.160725   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.160840   13942 provision.go:143] copyHostCerts
	I0915 06:30:56.160923   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 06:30:56.161059   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 06:30:56.161236   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 06:30:56.161313   13942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.addons-368929 san=[127.0.0.1 192.168.39.212 addons-368929 localhost minikube]
	I0915 06:30:56.248249   13942 provision.go:177] copyRemoteCerts
	I0915 06:30:56.248322   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:30:56.248351   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.251283   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.251603   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.251636   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.251851   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.252026   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.252134   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.252249   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.336360   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:30:56.360914   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:30:56.385134   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 06:30:56.408123   13942 provision.go:87] duration metric: took 253.040376ms to configureAuth
	I0915 06:30:56.408147   13942 buildroot.go:189] setting minikube options for container-runtime
	I0915 06:30:56.408302   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:56.408370   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.410873   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.411209   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.411236   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.411382   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.411556   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.411726   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.411866   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.412039   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.412202   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.412215   13942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:30:56.625572   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:30:56.625596   13942 main.go:141] libmachine: Checking connection to Docker...
	I0915 06:30:56.625603   13942 main.go:141] libmachine: (addons-368929) Calling .GetURL
	I0915 06:30:56.626810   13942 main.go:141] libmachine: (addons-368929) DBG | Using libvirt version 6000000
	I0915 06:30:56.628657   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.628951   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.628973   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.629143   13942 main.go:141] libmachine: Docker is up and running!
	I0915 06:30:56.629155   13942 main.go:141] libmachine: Reticulating splines...
	I0915 06:30:56.629162   13942 client.go:171] duration metric: took 22.044062992s to LocalClient.Create
	I0915 06:30:56.629182   13942 start.go:167] duration metric: took 22.044122374s to libmachine.API.Create "addons-368929"
	I0915 06:30:56.629204   13942 start.go:293] postStartSetup for "addons-368929" (driver="kvm2")
	I0915 06:30:56.629219   13942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:30:56.629241   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.629436   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:30:56.629459   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.631144   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.631446   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.631469   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.631552   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.631671   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.631765   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.631918   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.712275   13942 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:30:56.716708   13942 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 06:30:56.716735   13942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 06:30:56.716821   13942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 06:30:56.716859   13942 start.go:296] duration metric: took 87.643981ms for postStartSetup
	I0915 06:30:56.716897   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:56.717419   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.719736   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.720131   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.720166   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.720394   13942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json ...
	I0915 06:30:56.720616   13942 start.go:128] duration metric: took 22.152940074s to createHost
	I0915 06:30:56.720641   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.722803   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.723117   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.723157   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.723308   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.723466   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.723612   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.723752   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.723900   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.724053   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.724062   13942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 06:30:56.826287   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726381856.792100710
	
	I0915 06:30:56.826308   13942 fix.go:216] guest clock: 1726381856.792100710
	I0915 06:30:56.826317   13942 fix.go:229] Guest: 2024-09-15 06:30:56.79210071 +0000 UTC Remote: 2024-09-15 06:30:56.720628741 +0000 UTC m=+22.251007338 (delta=71.471969ms)
	I0915 06:30:56.826365   13942 fix.go:200] guest clock delta is within tolerance: 71.471969ms
	I0915 06:30:56.826373   13942 start.go:83] releasing machines lock for "addons-368929", held for 22.25878368s
	I0915 06:30:56.826395   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.826655   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.828977   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.829310   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.829334   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.829599   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830090   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830276   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830359   13942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:30:56.830415   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.830460   13942 ssh_runner.go:195] Run: cat /version.json
	I0915 06:30:56.830484   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.833094   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833320   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833452   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.833493   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833613   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.833768   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.833779   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.833801   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833988   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.833998   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.834119   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.834185   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.834246   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.834495   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.938490   13942 ssh_runner.go:195] Run: systemctl --version
	I0915 06:30:56.944445   13942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:30:57.102745   13942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 06:30:57.108913   13942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 06:30:57.108984   13942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:57.124469   13942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 06:30:57.124494   13942 start.go:495] detecting cgroup driver to use...
	I0915 06:30:57.124559   13942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:30:57.141386   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:30:57.155119   13942 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:30:57.155185   13942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:30:57.168695   13942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:30:57.182111   13942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:30:57.306290   13942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:30:57.442868   13942 docker.go:233] disabling docker service ...
	I0915 06:30:57.442931   13942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:30:57.456992   13942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:30:57.470375   13942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:30:57.613118   13942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:30:57.736610   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:30:57.750704   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:57.769455   13942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:30:57.769509   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.779795   13942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:30:57.779873   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.790360   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.800573   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.811474   13942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:30:57.822289   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.832671   13942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.849736   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.860236   13942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:30:57.869843   13942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 06:30:57.869913   13942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 06:30:57.883852   13942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:30:57.893890   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:58.013644   13942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 06:30:58.112843   13942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:30:58.112948   13942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:30:58.119889   13942 start.go:563] Will wait 60s for crictl version
	I0915 06:30:58.119973   13942 ssh_runner.go:195] Run: which crictl
	I0915 06:30:58.123756   13942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:30:58.159622   13942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 06:30:58.159742   13942 ssh_runner.go:195] Run: crio --version
	I0915 06:30:58.186651   13942 ssh_runner.go:195] Run: crio --version
	I0915 06:30:58.215616   13942 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 06:30:58.216928   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:58.219246   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:58.219519   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:58.219540   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:58.219725   13942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 06:30:58.223999   13942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:58.236938   13942 kubeadm.go:883] updating cluster {Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:30:58.237037   13942 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:58.237078   13942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:58.273590   13942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0915 06:30:58.273648   13942 ssh_runner.go:195] Run: which lz4
	I0915 06:30:58.277802   13942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 06:30:58.282345   13942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 06:30:58.282370   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0915 06:30:59.603321   13942 crio.go:462] duration metric: took 1.325549194s to copy over tarball
	I0915 06:30:59.603391   13942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 06:31:01.698248   13942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094830019s)
	I0915 06:31:01.698276   13942 crio.go:469] duration metric: took 2.094925403s to extract the tarball
	I0915 06:31:01.698286   13942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 06:31:01.735576   13942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:31:01.777236   13942 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:31:01.777262   13942 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:31:01.777272   13942 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I0915 06:31:01.777361   13942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-368929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:31:01.777425   13942 ssh_runner.go:195] Run: crio config
	I0915 06:31:01.819719   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:31:01.819741   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:31:01.819753   13942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:31:01.819775   13942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-368929 NodeName:addons-368929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:31:01.819928   13942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-368929"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:31:01.820001   13942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:31:01.830202   13942 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:31:01.830264   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:31:01.840653   13942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0915 06:31:01.859116   13942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:31:01.876520   13942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0915 06:31:01.893776   13942 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0915 06:31:01.897643   13942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:31:01.910584   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:31:02.038664   13942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:31:02.055783   13942 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929 for IP: 192.168.39.212
	I0915 06:31:02.055810   13942 certs.go:194] generating shared ca certs ...
	I0915 06:31:02.055829   13942 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.055990   13942 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 06:31:02.153706   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt ...
	I0915 06:31:02.153733   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt: {Name:mk72efeae7a5e079e02dddca5ae1326e66b50791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.153893   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key ...
	I0915 06:31:02.153904   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key: {Name:mk60adb75b67a4ecb03ce39bc98fc22d93504324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.153974   13942 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 06:31:02.294105   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt ...
	I0915 06:31:02.294129   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt: {Name:mk6ad9572391112128f71a73d401b2f36e5187ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.294270   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key ...
	I0915 06:31:02.294280   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key: {Name:mk997129f7d8042b546775ee409cc0c02ea66874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.294341   13942 certs.go:256] generating profile certs ...
	I0915 06:31:02.294402   13942 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key
	I0915 06:31:02.294422   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt with IP's: []
	I0915 06:31:02.474521   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt ...
	I0915 06:31:02.474552   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: {Name:mk5230116ec10f82362ea4d2c021febd7553501e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.474711   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key ...
	I0915 06:31:02.474722   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key: {Name:mk4c7cfc18d39b7a5234396e9e59579ecd48ad76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.474787   13942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f
	I0915 06:31:02.474804   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212]
	I0915 06:31:02.564099   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f ...
	I0915 06:31:02.564130   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f: {Name:mkc23c9f9e76c0a988b86d564062dd840e1d35eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.564279   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f ...
	I0915 06:31:02.564291   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f: {Name:mk4e887c90c5c7adca7e638dabe3b3c3ddd2bf81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.564361   13942 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt
	I0915 06:31:02.564435   13942 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key
	I0915 06:31:02.564480   13942 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key
	I0915 06:31:02.564496   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt with IP's: []
	I0915 06:31:02.689851   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt ...
	I0915 06:31:02.689879   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt: {Name:mk64a1aa0a2a68e9a444363c01c5932bf3e0851a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.690029   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key ...
	I0915 06:31:02.690039   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key: {Name:mk7c8d3875c49566ea32a3445025bddf158772fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.690216   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:31:02.690247   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:31:02.690274   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:31:02.690296   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 06:31:02.690807   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:31:02.716623   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:31:02.745150   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:31:02.773869   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:31:02.798062   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:31:02.820956   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:31:02.844972   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:31:02.869179   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 06:31:02.893630   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:31:02.917474   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:31:02.934168   13942 ssh_runner.go:195] Run: openssl version
	I0915 06:31:02.940062   13942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:31:02.951007   13942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.955419   13942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.955475   13942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.961175   13942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 06:31:02.972122   13942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:31:02.976566   13942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:31:02.976612   13942 kubeadm.go:392] StartCluster: {Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:31:02.976677   13942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:31:02.976718   13942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:31:03.012559   13942 cri.go:89] found id: ""
	I0915 06:31:03.012619   13942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:31:03.022968   13942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:31:03.032884   13942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:31:03.042781   13942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:31:03.042798   13942 kubeadm.go:157] found existing configuration files:
	
	I0915 06:31:03.042840   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:31:03.052268   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:31:03.052318   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:31:03.062232   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:31:03.071324   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:31:03.071379   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:31:03.080551   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:31:03.089375   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:31:03.089424   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:31:03.099002   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:31:03.108163   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:31:03.108213   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:31:03.117874   13942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 06:31:03.179081   13942 kubeadm.go:310] W0915 06:31:03.150215     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:31:03.179952   13942 kubeadm.go:310] W0915 06:31:03.151258     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:31:03.288765   13942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:31:13.244212   13942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:31:13.244285   13942 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:31:13.244371   13942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:31:13.244504   13942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:31:13.244637   13942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:31:13.244724   13942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:31:13.246462   13942 out.go:235]   - Generating certificates and keys ...
	I0915 06:31:13.246540   13942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:31:13.246602   13942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:31:13.246676   13942 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:31:13.246741   13942 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:31:13.246798   13942 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:31:13.246841   13942 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:31:13.246910   13942 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:31:13.247029   13942 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-368929 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0915 06:31:13.247105   13942 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:31:13.247259   13942 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-368929 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0915 06:31:13.247354   13942 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:31:13.247454   13942 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:31:13.247496   13942 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:31:13.247569   13942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:31:13.247649   13942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:31:13.247737   13942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:31:13.247812   13942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:31:13.247905   13942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:31:13.247987   13942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:31:13.248103   13942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:31:13.248230   13942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:31:13.249711   13942 out.go:235]   - Booting up control plane ...
	I0915 06:31:13.249799   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:31:13.249895   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:31:13.249949   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:31:13.250075   13942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:31:13.250170   13942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:31:13.250212   13942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:31:13.250324   13942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:31:13.250471   13942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:31:13.250554   13942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000955995s
	I0915 06:31:13.250648   13942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:31:13.250740   13942 kubeadm.go:310] [api-check] The API server is healthy after 5.001828524s
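
The two health checks logged above (the kubelet's /healthz on 127.0.0.1:10248, then the API server check) are plain HTTP polls against a deadline. A minimal Go sketch of that pattern, assuming the same kubelet endpoint and a generous timeout (illustrative only, not kubeadm's or minikube's actual implementation):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The kubelet's local health endpoint referenced in the log above.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}
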
	I0915 06:31:13.250879   13942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:31:13.250988   13942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:31:13.251068   13942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:31:13.251284   13942 kubeadm.go:310] [mark-control-plane] Marking the node addons-368929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:31:13.251342   13942 kubeadm.go:310] [bootstrap-token] Using token: 0sj1hx.q1rkmq819x572pmn
	I0915 06:31:13.252875   13942 out.go:235]   - Configuring RBAC rules ...
	I0915 06:31:13.253007   13942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:31:13.253098   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:31:13.253263   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:31:13.253367   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:31:13.253467   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:31:13.253534   13942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:31:13.253646   13942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:31:13.253696   13942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:31:13.253766   13942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:31:13.253779   13942 kubeadm.go:310] 
	I0915 06:31:13.253880   13942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:31:13.253892   13942 kubeadm.go:310] 
	I0915 06:31:13.253965   13942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:31:13.253973   13942 kubeadm.go:310] 
	I0915 06:31:13.253994   13942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:31:13.254066   13942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:31:13.254144   13942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:31:13.254155   13942 kubeadm.go:310] 
	I0915 06:31:13.254229   13942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:31:13.254238   13942 kubeadm.go:310] 
	I0915 06:31:13.254305   13942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:31:13.254315   13942 kubeadm.go:310] 
	I0915 06:31:13.254361   13942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:31:13.254433   13942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:31:13.254531   13942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:31:13.254543   13942 kubeadm.go:310] 
	I0915 06:31:13.254651   13942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:31:13.254721   13942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:31:13.254735   13942 kubeadm.go:310] 
	I0915 06:31:13.254843   13942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0sj1hx.q1rkmq819x572pmn \
	I0915 06:31:13.254928   13942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b \
	I0915 06:31:13.254946   13942 kubeadm.go:310] 	--control-plane 
	I0915 06:31:13.254952   13942 kubeadm.go:310] 
	I0915 06:31:13.255027   13942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:31:13.255036   13942 kubeadm.go:310] 
	I0915 06:31:13.255108   13942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0sj1hx.q1rkmq819x572pmn \
	I0915 06:31:13.255213   13942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b 
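
The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that reproduces such a hash from a CA certificate file; the path below is the kubeadm default and is only illustrative (minikube keeps its certs under /var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // path is an assumption
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
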
	I0915 06:31:13.255224   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:31:13.255230   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:31:13.256846   13942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 06:31:13.258367   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 06:31:13.269533   13942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
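
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration. The log does not show its contents; the sketch below writes a hypothetical minimal bridge + portmap conflist (the plugin set and subnet are assumptions) just to illustrate the shape of such a file:

package main

import (
	"fmt"
	"os"
)

// A minimal bridge conflist in the spirit of what minikube installs;
// the exact content minikube generates differs.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
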
	I0915 06:31:13.286955   13942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:31:13.287033   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:13.287047   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-368929 minikube.k8s.io/updated_at=2024_09_15T06_31_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-368929 minikube.k8s.io/primary=true
	I0915 06:31:13.439577   13942 ops.go:34] apiserver oom_adj: -16
	I0915 06:31:13.439619   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:13.939804   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:14.440122   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:14.939768   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:15.440612   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:15.940408   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:16.439804   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:16.940340   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:17.440583   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:17.530382   13942 kubeadm.go:1113] duration metric: took 4.243409251s to wait for elevateKubeSystemPrivileges
	I0915 06:31:17.530429   13942 kubeadm.go:394] duration metric: took 14.553819023s to StartCluster
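
The repeated `kubectl get sa default` runs above (roughly every 500ms) are a wait loop for the default ServiceAccount to appear before the cluster is treated as ready. A minimal sketch of the same retry pattern, assuming kubectl is on PATH and using the kubeconfig path from the log (not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds,
// mirroring the repeated Run lines in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is present")
}
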
	I0915 06:31:17.530452   13942 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:17.530582   13942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:31:17.530898   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:17.531115   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:31:17.531117   13942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:31:17.531135   13942 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
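
From this point the log lines interleave because each enabled addon is prepared on its own goroutine. A stripped-down illustration of that fan-out pattern with a WaitGroup (addon names taken from the map above; the per-addon body is a placeholder, not minikube's enable logic):

package main

import (
	"fmt"
	"sync"
)

func main() {
	addons := []string{"yakd", "ingress-dns", "metrics-server", "registry", "csi-hostpath-driver"}

	var wg sync.WaitGroup
	for _, name := range addons {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			// Placeholder for the real per-addon work (applying manifests over SSH).
			fmt.Printf("enabling addon %q\n", name)
		}(name)
	}
	wg.Wait()
	fmt.Println("all addon goroutines finished")
}
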
	I0915 06:31:17.531245   13942 addons.go:69] Setting yakd=true in profile "addons-368929"
	I0915 06:31:17.531264   13942 addons.go:234] Setting addon yakd=true in "addons-368929"
	I0915 06:31:17.531291   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531295   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:31:17.531303   13942 addons.go:69] Setting ingress-dns=true in profile "addons-368929"
	I0915 06:31:17.531317   13942 addons.go:69] Setting default-storageclass=true in profile "addons-368929"
	I0915 06:31:17.531326   13942 addons.go:234] Setting addon ingress-dns=true in "addons-368929"
	I0915 06:31:17.531335   13942 addons.go:69] Setting metrics-server=true in profile "addons-368929"
	I0915 06:31:17.531338   13942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-368929"
	I0915 06:31:17.531337   13942 addons.go:69] Setting registry=true in profile "addons-368929"
	I0915 06:31:17.531349   13942 addons.go:234] Setting addon metrics-server=true in "addons-368929"
	I0915 06:31:17.531345   13942 addons.go:69] Setting inspektor-gadget=true in profile "addons-368929"
	I0915 06:31:17.531359   13942 addons.go:234] Setting addon registry=true in "addons-368929"
	I0915 06:31:17.531366   13942 addons.go:234] Setting addon inspektor-gadget=true in "addons-368929"
	I0915 06:31:17.531374   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531366   13942 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-368929"
	I0915 06:31:17.531389   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531390   13942 addons.go:69] Setting ingress=true in profile "addons-368929"
	I0915 06:31:17.531398   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531406   13942 addons.go:234] Setting addon ingress=true in "addons-368929"
	I0915 06:31:17.531416   13942 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-368929"
	I0915 06:31:17.531429   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531441   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531763   13942 addons.go:69] Setting storage-provisioner=true in profile "addons-368929"
	I0915 06:31:17.531769   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531778   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531782   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531785   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531825   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531788   13942 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-368929"
	I0915 06:31:17.531921   13942 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-368929"
	I0915 06:31:17.531375   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531784   13942 addons.go:234] Setting addon storage-provisioner=true in "addons-368929"
	I0915 06:31:17.532163   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531796   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531381   13942 addons.go:69] Setting gcp-auth=true in profile "addons-368929"
	I0915 06:31:17.532282   13942 mustload.go:65] Loading cluster: addons-368929
	I0915 06:31:17.532299   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532333   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532362   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532377   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532462   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:31:17.532536   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532574   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531801   13942 addons.go:69] Setting volcano=true in profile "addons-368929"
	I0915 06:31:17.532649   13942 addons.go:234] Setting addon volcano=true in "addons-368929"
	I0915 06:31:17.532676   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531802   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532807   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532834   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.533044   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531799   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533082   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.533100   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531802   13942 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-368929"
	I0915 06:31:17.533268   13942 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-368929"
	I0915 06:31:17.533292   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531799   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533422   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531386   13942 addons.go:69] Setting helm-tiller=true in profile "addons-368929"
	I0915 06:31:17.533579   13942 addons.go:234] Setting addon helm-tiller=true in "addons-368929"
	I0915 06:31:17.533603   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.533660   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533677   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531808   13942 addons.go:69] Setting cloud-spanner=true in profile "addons-368929"
	I0915 06:31:17.533996   13942 addons.go:234] Setting addon cloud-spanner=true in "addons-368929"
	I0915 06:31:17.534023   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531808   13942 addons.go:69] Setting volumesnapshots=true in profile "addons-368929"
	I0915 06:31:17.534072   13942 addons.go:234] Setting addon volumesnapshots=true in "addons-368929"
	I0915 06:31:17.534098   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.534391   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.534396   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.534404   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.534410   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.544717   13942 out.go:177] * Verifying Kubernetes components...
	I0915 06:31:17.531817   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531900   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.546517   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.551069   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:31:17.552863   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0915 06:31:17.552873   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I0915 06:31:17.553975   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0915 06:31:17.554008   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0915 06:31:17.554479   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.554606   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.554630   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.554982   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.555001   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.555033   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0915 06:31:17.555190   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.555399   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.555473   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556128   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.556141   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.556194   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.556312   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.556324   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.556379   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556441   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556504   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.556665   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.557213   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.557249   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.560223   13942 addons.go:234] Setting addon default-storageclass=true in "addons-368929"
	I0915 06:31:17.560260   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.560623   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.560654   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.562235   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.562259   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.562337   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.562459   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.562469   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.564071   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.564137   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.564190   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0915 06:31:17.564701   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.564732   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.565696   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.565803   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0915 06:31:17.566345   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.566413   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.566440   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.566451   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.566783   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.566811   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.568220   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.568238   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.568363   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.568373   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.568586   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.568722   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.575956   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.578834   13942 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-368929"
	I0915 06:31:17.578915   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.579206   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.579264   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.586757   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0915 06:31:17.587453   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.587903   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.587915   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.588249   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.588667   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.588681   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.589379   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0915 06:31:17.591499   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I0915 06:31:17.592121   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.592540   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
	I0915 06:31:17.592775   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.592797   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.593043   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.593129   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.593632   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.593670   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.594252   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.594269   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.594288   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.594321   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.594721   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.595309   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.595327   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.595709   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.596188   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.597875   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.598751   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.599189   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.599228   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.600165   13942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:31:17.601729   13942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:31:17.601752   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:31:17.601771   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.605356   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.605714   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.605733   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.606017   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.606225   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.606363   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.606488   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
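
The `new ssh client` line records minikube connecting to the VM as user docker at 192.168.39.212:22 with the generated private key; the surrounding `scp memory --> ...` lines push addon manifests over that connection. A minimal Go sketch of opening such a connection and running one command with golang.org/x/crypto/ssh, reusing the address and key path from the log (illustrative only, not minikube's sshutil implementation):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		fmt.Println(err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.212:22", cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		fmt.Println(err)
		return
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo ls /etc/kubernetes/addons")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%s", out)
}
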
	I0915 06:31:17.608862   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0915 06:31:17.609332   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.609839   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.609855   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.610126   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0915 06:31:17.610221   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.610370   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.610667   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.611155   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.611171   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.611594   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.612184   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.612207   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.612239   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.613742   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0915 06:31:17.614273   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.614432   13942 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:31:17.614832   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.614856   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.615194   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.615706   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.615749   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.615938   13942 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:31:17.615956   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:31:17.615977   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.618736   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I0915 06:31:17.619406   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.619549   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.619887   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.619906   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.619934   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0915 06:31:17.619991   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.620005   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.620125   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.620284   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.620306   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.620389   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.620439   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.620546   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.620912   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.620929   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.621094   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.621127   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.621225   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.621390   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.623143   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.624009   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0915 06:31:17.624078   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0915 06:31:17.624757   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.625302   13942 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:31:17.625323   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.625341   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.625640   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.626189   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.626227   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.626492   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0915 06:31:17.626724   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.627122   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:31:17.627137   13942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:31:17.627150   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.627443   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.627842   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.627858   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.628226   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.628780   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.628824   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.629909   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0915 06:31:17.630269   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.630711   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.630727   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.630778   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.630930   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.630947   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.631293   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.631304   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.631320   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.631317   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.631497   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.631668   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.632017   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.632057   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.632337   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.632451   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.635887   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0915 06:31:17.636212   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.636655   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.636671   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.637236   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.637272   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.637490   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.637666   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.639294   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.641479   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:31:17.642960   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:31:17.642978   13942 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:31:17.643001   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.646117   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.646502   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.646522   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.646795   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.647022   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.647177   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.647337   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.650261   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0915 06:31:17.652110   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0915 06:31:17.652286   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0915 06:31:17.652480   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0915 06:31:17.652627   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.652721   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653099   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653125   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.653192   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653334   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653346   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.653410   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653645   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.653768   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653779   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.655709   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0915 06:31:17.655715   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.655739   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.655715   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.655788   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.655803   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.655938   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.656099   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.656181   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.656265   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.656670   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.656688   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.656739   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658305   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658369   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658421   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.659062   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.659317   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.660430   13942 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:31:17.660468   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:31:17.660448   13942 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:31:17.660836   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.661129   13942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:31:17.661142   13942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:31:17.661158   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.661714   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0915 06:31:17.662496   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.662785   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:31:17.662803   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:31:17.662819   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.663231   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.663251   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.663851   13942 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:31:17.663851   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:31:17.665526   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:31:17.665540   13942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:31:17.665573   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:31:17.665590   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.666518   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.667159   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.667209   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.667518   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.667975   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:31:17.668218   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.668405   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.668924   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.668959   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.669158   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.669315   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.669371   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.669386   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.669496   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.669832   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.670044   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.670171   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.670275   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.670559   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:31:17.671441   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.672225   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.672238   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.672404   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.672567   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.672724   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.672859   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.673060   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32869
	I0915 06:31:17.673167   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0915 06:31:17.673197   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:31:17.673464   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.673593   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.674180   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.674197   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.674602   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.674866   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.674970   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0915 06:31:17.676021   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.676113   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:31:17.676325   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.676341   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.676424   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0915 06:31:17.676962   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.677312   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.677398   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.677414   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.677562   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.677584   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.677859   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.678040   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.678549   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0915 06:31:17.679078   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.679084   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0915 06:31:17.679106   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.679181   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.679630   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.679647   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.679655   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:31:17.679706   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.679708   13942 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:31:17.679755   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.679825   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.679985   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.680370   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.680389   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.680459   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.680667   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.680925   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.681204   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:31:17.681687   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:31:17.681708   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.681215   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.681887   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.682415   13942 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:31:17.682606   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.682597   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.682703   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:31:17.683242   13942 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:31:17.684048   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:31:17.684064   13942 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:31:17.684082   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.684243   13942 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:31:17.684541   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.685180   13942 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:31:17.685283   13942 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:31:17.685293   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:31:17.685309   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.685970   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.687103   13942 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:31:17.687121   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:31:17.687139   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.687240   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.687254   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.687302   13942 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:31:17.687385   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.687553   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.687909   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.688208   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:17.688311   13942 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:31:17.688326   13942 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:31:17.688342   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.688954   13942 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:31:17.688971   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:31:17.688986   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.689661   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.689991   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.690025   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.690043   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.690736   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.691322   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.691400   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.691795   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.691992   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.692014   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.692081   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.692213   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.692319   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.692403   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.692846   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.693446   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.694070   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.694103   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694326   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.694567   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.694594   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694685   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.694776   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:17.694916   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.694936   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694974   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.695197   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.695468   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.695660   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.695792   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.696758   13942 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:31:17.696772   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:31:17.696794   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.696898   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0915 06:31:17.697337   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.697347   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.697891   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.697904   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.698246   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.698537   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.698553   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.698595   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.698766   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.698883   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.698993   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.699115   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.699868   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.700039   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.700238   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:17.700244   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:17.700527   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.700539   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.700557   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:17.700564   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:17.700571   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:17.700585   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:17.700712   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.700759   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:17.700775   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:17.700779   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	W0915 06:31:17.700830   13942 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0915 06:31:17.701036   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.701127   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.701199   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.974082   13942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:31:17.974246   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 06:31:18.029440   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:31:18.029460   13942 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:31:18.067088   13942 node_ready.go:35] waiting up to 6m0s for node "addons-368929" to be "Ready" ...
	I0915 06:31:18.078224   13942 node_ready.go:49] node "addons-368929" has status "Ready":"True"
	I0915 06:31:18.078251   13942 node_ready.go:38] duration metric: took 11.135756ms for node "addons-368929" to be "Ready" ...
	I0915 06:31:18.078264   13942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:31:18.135940   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:31:18.135964   13942 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:31:18.141367   13942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:18.199001   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:31:18.204686   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:31:18.204710   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:31:18.222305   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:31:18.222333   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:31:18.235001   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:31:18.242915   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:31:18.264618   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:31:18.264645   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:31:18.278064   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:31:18.295028   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:31:18.313913   13942 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:31:18.313945   13942 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:31:18.321100   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:31:18.321126   13942 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:31:18.324341   13942 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:31:18.324361   13942 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:31:18.342086   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:31:18.355928   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:31:18.386848   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:31:18.386873   13942 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:31:18.430309   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:31:18.430338   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:31:18.436199   13942 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:31:18.436227   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:31:18.467018   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:31:18.467043   13942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:31:18.469097   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:31:18.469118   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:31:18.475758   13942 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:31:18.475776   13942 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:31:18.524849   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:31:18.559766   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:31:18.559796   13942 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:31:18.574119   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:31:18.629489   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:31:18.629514   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:31:18.636860   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:31:18.636883   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:31:18.656652   13942 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:31:18.656681   13942 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:31:18.671346   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:31:18.671371   13942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:31:18.776151   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:31:18.776174   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:31:18.786697   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:31:18.786725   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:31:18.790802   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:31:18.790824   13942 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:31:18.811252   13942 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:31:18.811276   13942 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:31:18.841135   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:31:18.940848   13942 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:18.940871   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:31:18.948147   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:31:18.968172   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:31:18.968200   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:31:19.099306   13942 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:31:19.099337   13942 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:31:19.208753   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:19.261571   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:31:19.261592   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:31:19.427555   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:31:19.427591   13942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:31:19.452460   13942 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:31:19.452489   13942 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:31:19.729819   13942 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.75553178s)
	I0915 06:31:19.729857   13942 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0915 06:31:19.729914   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.530876961s)
	I0915 06:31:19.729955   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.729966   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.730363   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.730385   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.730385   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:19.730403   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.730418   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.730721   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.730736   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.737048   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.737066   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.737366   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.737390   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.737396   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:19.835914   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:31:19.835934   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:31:19.848468   13942 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:31:19.848493   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:31:20.068594   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:31:20.139377   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:31:20.139404   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:31:20.147456   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:20.234504   13942 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-368929" context rescaled to 1 replicas
	I0915 06:31:20.491704   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:31:20.491730   13942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:31:20.932400   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:31:22.212244   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:22.409208   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.174166978s)
	I0915 06:31:22.409210   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.166269282s)
	I0915 06:31:22.409299   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409318   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409257   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409391   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409620   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.409658   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.409665   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.409672   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409678   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409744   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.409768   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.409783   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.409793   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409801   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.410154   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.410195   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.410199   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.410217   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.410251   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.410221   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:24.154654   13942 pod_ready.go:93] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:24.154684   13942 pod_ready.go:82] duration metric: took 6.01329144s for pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:24.154696   13942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:24.756169   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:31:24.756215   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:24.759593   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:24.760038   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:24.760065   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:24.760279   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:24.760520   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:24.760709   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:24.760868   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:25.159761   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:31:25.482013   13942 addons.go:234] Setting addon gcp-auth=true in "addons-368929"
	I0915 06:31:25.482064   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:25.482369   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:25.482396   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:25.497336   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0915 06:31:25.497758   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:25.498209   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:25.498231   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:25.498517   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:25.499067   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:25.499103   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:25.514609   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0915 06:31:25.515143   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:25.515688   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:25.515716   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:25.516029   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:25.516249   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:25.517863   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:25.518086   13942 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:31:25.518112   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:25.520701   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:25.521094   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:25.521124   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:25.521252   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:25.521421   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:25.521577   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:25.521709   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:26.232203   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:26.243417   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.965315482s)
	I0915 06:31:26.243453   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.948392742s)
	I0915 06:31:26.243471   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243480   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243483   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243491   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243629   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.901516275s)
	I0915 06:31:26.243667   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243675   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.887721395s)
	I0915 06:31:26.243697   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243713   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243752   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.718870428s)
	I0915 06:31:26.243780   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243794   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243853   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.243869   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.243874   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.669731672s)
	I0915 06:31:26.243878   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243886   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243891   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243899   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243677   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243962   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.243992   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.243998   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244005   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244011   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244024   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.402863813s)
	I0915 06:31:26.244039   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244047   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244076   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.244093   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.244094   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.244103   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244111   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244115   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.244121   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244127   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.295952452s)
	I0915 06:31:26.244138   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244145   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244147   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244155   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244156   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244249   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.035463778s)
	W0915 06:31:26.244279   13942 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:31:26.244307   13942 retry.go:31] will retry after 256.93896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:31:26.244415   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.175791562s)
	I0915 06:31:26.244434   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244443   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245740   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245773   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245783   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245793   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.245803   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245868   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245878   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245886   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.245892   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245938   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245963   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245982   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245990   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245997   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.246004   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.246041   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246060   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246066   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246295   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246321   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246328   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246504   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246547   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246537   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246564   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246564   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246583   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246589   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246624   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246635   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246645   13942 addons.go:475] Verifying addon registry=true in "addons-368929"
	I0915 06:31:26.246763   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246789   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246797   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246808   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.246818   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.246946   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246973   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246979   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246987   13942 addons.go:475] Verifying addon metrics-server=true in "addons-368929"
	I0915 06:31:26.247083   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.247110   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.247120   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248059   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.248078   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248087   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.248095   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.248285   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.248299   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248307   13942 addons.go:475] Verifying addon ingress=true in "addons-368929"
	I0915 06:31:26.248402   13942 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-368929 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:31:26.248878   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.248901   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.250860   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.251360   13942 out.go:177] * Verifying registry addon...
	I0915 06:31:26.252258   13942 out.go:177] * Verifying ingress addon...
	I0915 06:31:26.253897   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:31:26.254716   13942 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:31:26.282507   13942 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:31:26.282535   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.283231   13942 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:31:26.283254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.326048   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.326076   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.326366   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.326389   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.502303   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:26.763104   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.763404   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.464589   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.465574   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.760221   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.760580   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.262507   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.263438   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.687007   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:28.777944   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.778464   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.790673   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.858215393s)
	I0915 06:31:28.790714   13942 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.272605642s)
	I0915 06:31:28.790731   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.790749   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.790820   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.288483379s)
	I0915 06:31:28.790865   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.790883   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791037   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:28.791080   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791088   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791096   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.791102   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791119   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791129   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791137   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.791143   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791312   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:28.791359   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791365   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791374   13942 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-368929"
	I0915 06:31:28.791536   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791550   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.792735   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:28.793437   13942 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:31:28.795140   13942 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:31:28.795935   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:31:28.796597   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:31:28.796611   13942 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:31:28.830229   13942 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:31:28.830253   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.871919   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:31:28.871943   13942 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:31:28.958746   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:31:28.958766   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:31:28.979296   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:31:29.260856   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.260969   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.300857   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.763057   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.763185   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.815747   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.011418   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.032085812s)
	I0915 06:31:30.011471   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:30.011485   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:30.011741   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:30.011804   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:30.011820   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:30.011832   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:30.011842   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:30.012069   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:30.012085   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:30.014149   13942 addons.go:475] Verifying addon gcp-auth=true in "addons-368929"
	I0915 06:31:30.015992   13942 out.go:177] * Verifying gcp-auth addon...
	I0915 06:31:30.018271   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:31:30.051440   13942 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:31:30.051458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.261829   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.261988   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.302477   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.525517   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.658488   13942 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xbx5t" not found
	I0915 06:31:30.658511   13942 pod_ready.go:82] duration metric: took 6.503808371s for pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace to be "Ready" ...
	E0915 06:31:30.658521   13942 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xbx5t" not found
	I0915 06:31:30.658528   13942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.665242   13942 pod_ready.go:93] pod "etcd-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.665263   13942 pod_ready.go:82] duration metric: took 6.72824ms for pod "etcd-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.665272   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.671635   13942 pod_ready.go:93] pod "kube-apiserver-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.671653   13942 pod_ready.go:82] duration metric: took 6.375828ms for pod "kube-apiserver-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.671661   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.678724   13942 pod_ready.go:93] pod "kube-controller-manager-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.678750   13942 pod_ready.go:82] duration metric: took 7.08028ms for pod "kube-controller-manager-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.678762   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ldpsk" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.687370   13942 pod_ready.go:93] pod "kube-proxy-ldpsk" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.687396   13942 pod_ready.go:82] duration metric: took 8.62656ms for pod "kube-proxy-ldpsk" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.687405   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.767076   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.767584   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.800983   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.859527   13942 pod_ready.go:93] pod "kube-scheduler-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.859556   13942 pod_ready.go:82] duration metric: took 172.143761ms for pod "kube-scheduler-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.859566   13942 pod_ready.go:39] duration metric: took 12.781287726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:31:30.859585   13942 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:31:30.859643   13942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:31:30.917869   13942 api_server.go:72] duration metric: took 13.386663133s to wait for apiserver process to appear ...
	I0915 06:31:30.917897   13942 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:31:30.917922   13942 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0915 06:31:30.923875   13942 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0915 06:31:30.924981   13942 api_server.go:141] control plane version: v1.31.1
	I0915 06:31:30.924999   13942 api_server.go:131] duration metric: took 7.095604ms to wait for apiserver health ...
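
The healthz wait just above is an HTTPS GET against the apiserver at 192.168.39.212:8443 that is treated as healthy once it returns 200 with the body "ok". A standalone sketch of the same probe follows; skipping TLS verification is an illustrative shortcut, since minikube's real client authenticates with the cluster's certificates.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe /healthz the way the log above does: GET and expect 200 with body "ok".
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is only for this sketch; use the cluster CA and client certs in practice.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.212:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```
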
	I0915 06:31:30.925006   13942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:31:31.022799   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.064433   13942 system_pods.go:59] 18 kube-system pods found
	I0915 06:31:31.064467   13942 system_pods.go:61] "coredns-7c65d6cfc9-d42kz" [df259178-5edc-4af0-97ba-206daeab8c29] Running
	I0915 06:31:31.064479   13942 system_pods.go:61] "csi-hostpath-attacher-0" [0adda2d4-063c-4794-8f6b-ea93890a4674] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:31.064489   13942 system_pods.go:61] "csi-hostpath-resizer-0" [54b009bd-6cc0-49e7-82a2-9f7cf160569b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:31.064500   13942 system_pods.go:61] "csi-hostpathplugin-lsgqp" [7794aa6e-993e-4625-8fe9-562208645794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:31.064508   13942 system_pods.go:61] "etcd-addons-368929" [fd2748fc-bfea-4a7f-891d-99077f8233bf] Running
	I0915 06:31:31.064514   13942 system_pods.go:61] "kube-apiserver-addons-368929" [8ecbb12d-50b4-4d33-be92-d1430dbb9b31] Running
	I0915 06:31:31.064522   13942 system_pods.go:61] "kube-controller-manager-addons-368929" [966825ec-c456-4f8d-bb17-345e7ea3f48c] Running
	I0915 06:31:31.064529   13942 system_pods.go:61] "kube-ingress-dns-minikube" [ba1fa65c-7021-4ddf-a816-9f840f28af7d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:31.064539   13942 system_pods.go:61] "kube-proxy-ldpsk" [a2b364d0-170c-491f-a76a-1a9aac8268d1] Running
	I0915 06:31:31.064543   13942 system_pods.go:61] "kube-scheduler-addons-368929" [02b92939-9320-46e0-8afd-1f22d86465db] Running
	I0915 06:31:31.064549   13942 system_pods.go:61] "metrics-server-84c5f94fbc-2pshh" [0443fc45-c95c-4fab-9dfe-a1b598ac6c8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:31.064555   13942 system_pods.go:61] "nvidia-device-plugin-daemonset-kl795" [d0981521-b267-4cf9-82e3-73ca27f55631] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0915 06:31:31.064560   13942 system_pods.go:61] "registry-66c9cd494c-hbp2b" [29e66421-b96f-416d-b126-9c3b0d11bc7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:31.064566   13942 system_pods.go:61] "registry-proxy-ncp27" [cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:31.064574   13942 system_pods.go:61] "snapshot-controller-56fcc65765-gpfpd" [b21fd3c8-1828-47d4-8c9d-3281ea26cc2e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.064586   13942 system_pods.go:61] "snapshot-controller-56fcc65765-nj866" [364b2721-2e61-435f-b087-0c183c2e9c65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.064592   13942 system_pods.go:61] "storage-provisioner" [bf2fb433-e07a-4c6e-8438-67625e0215a8] Running
	I0915 06:31:31.064604   13942 system_pods.go:61] "tiller-deploy-b48cc5f79-cw67q" [6012a392-8d4a-4d69-a877-31fa7f992089] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:31:31.064613   13942 system_pods.go:74] duration metric: took 139.600952ms to wait for pod list to return data ...
	I0915 06:31:31.064626   13942 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:31:31.258650   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.259446   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.259836   13942 default_sa.go:45] found service account: "default"
	I0915 06:31:31.259856   13942 default_sa.go:55] duration metric: took 195.22286ms for default service account to be created ...
	I0915 06:31:31.259867   13942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:31:31.300588   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.464010   13942 system_pods.go:86] 18 kube-system pods found
	I0915 06:31:31.464039   13942 system_pods.go:89] "coredns-7c65d6cfc9-d42kz" [df259178-5edc-4af0-97ba-206daeab8c29] Running
	I0915 06:31:31.464047   13942 system_pods.go:89] "csi-hostpath-attacher-0" [0adda2d4-063c-4794-8f6b-ea93890a4674] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:31.464055   13942 system_pods.go:89] "csi-hostpath-resizer-0" [54b009bd-6cc0-49e7-82a2-9f7cf160569b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:31.464062   13942 system_pods.go:89] "csi-hostpathplugin-lsgqp" [7794aa6e-993e-4625-8fe9-562208645794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:31.464067   13942 system_pods.go:89] "etcd-addons-368929" [fd2748fc-bfea-4a7f-891d-99077f8233bf] Running
	I0915 06:31:31.464072   13942 system_pods.go:89] "kube-apiserver-addons-368929" [8ecbb12d-50b4-4d33-be92-d1430dbb9b31] Running
	I0915 06:31:31.464079   13942 system_pods.go:89] "kube-controller-manager-addons-368929" [966825ec-c456-4f8d-bb17-345e7ea3f48c] Running
	I0915 06:31:31.464086   13942 system_pods.go:89] "kube-ingress-dns-minikube" [ba1fa65c-7021-4ddf-a816-9f840f28af7d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:31.464098   13942 system_pods.go:89] "kube-proxy-ldpsk" [a2b364d0-170c-491f-a76a-1a9aac8268d1] Running
	I0915 06:31:31.464106   13942 system_pods.go:89] "kube-scheduler-addons-368929" [02b92939-9320-46e0-8afd-1f22d86465db] Running
	I0915 06:31:31.464114   13942 system_pods.go:89] "metrics-server-84c5f94fbc-2pshh" [0443fc45-c95c-4fab-9dfe-a1b598ac6c8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:31.464127   13942 system_pods.go:89] "nvidia-device-plugin-daemonset-kl795" [d0981521-b267-4cf9-82e3-73ca27f55631] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0915 06:31:31.464136   13942 system_pods.go:89] "registry-66c9cd494c-hbp2b" [29e66421-b96f-416d-b126-9c3b0d11bc7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:31.464145   13942 system_pods.go:89] "registry-proxy-ncp27" [cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:31.464153   13942 system_pods.go:89] "snapshot-controller-56fcc65765-gpfpd" [b21fd3c8-1828-47d4-8c9d-3281ea26cc2e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.464161   13942 system_pods.go:89] "snapshot-controller-56fcc65765-nj866" [364b2721-2e61-435f-b087-0c183c2e9c65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.464166   13942 system_pods.go:89] "storage-provisioner" [bf2fb433-e07a-4c6e-8438-67625e0215a8] Running
	I0915 06:31:31.464172   13942 system_pods.go:89] "tiller-deploy-b48cc5f79-cw67q" [6012a392-8d4a-4d69-a877-31fa7f992089] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:31:31.464181   13942 system_pods.go:126] duration metric: took 204.307671ms to wait for k8s-apps to be running ...
	I0915 06:31:31.464191   13942 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:31:31.464244   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:31:31.486956   13942 system_svc.go:56] duration metric: took 22.754715ms WaitForService to wait for kubelet
	I0915 06:31:31.486990   13942 kubeadm.go:582] duration metric: took 13.955789555s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:31:31.487013   13942 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:31:31.522077   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.659879   13942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 06:31:31.659920   13942 node_conditions.go:123] node cpu capacity is 2
	I0915 06:31:31.659934   13942 node_conditions.go:105] duration metric: took 172.914644ms to run NodePressure ...
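
The NodePressure step above reads node capacity from the API (17734596Ki ephemeral storage and 2 CPUs here). A small sketch of the same read with client-go, assuming the default kubeconfig path as in the earlier sketch and only printing capacity rather than evaluating pressure conditions:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity carries the cpu and ephemeral-storage figures logged by node_conditions.go.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
```
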
	I0915 06:31:31.659947   13942 start.go:241] waiting for startup goroutines ...
	I0915 06:31:31.759750   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.760177   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.800755   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.021954   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.259791   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.260569   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.300924   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.522475   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.759438   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.759934   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.800621   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.172220   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.271906   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.272260   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.302687   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.522439   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.763498   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.764289   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.801429   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.023038   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.259772   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.260041   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.300561   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.521913   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.759623   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.759710   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.800723   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.021779   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.260351   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.260447   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.299779   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.521913   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.760515   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.760927   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.800167   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.022203   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.257726   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.259665   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.299888   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.522528   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.758673   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.760425   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.801181   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.022185   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.258988   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.259048   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.300658   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.522233   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.757443   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.758723   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.800691   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.022095   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.257419   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.259009   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.300410   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.522197   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.757617   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.759144   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.800893   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.022318   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.261103   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.261240   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.300803   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.521354   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.759863   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.760107   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.802301   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.022269   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.257834   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.262295   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.300771   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.522661   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.759261   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.759486   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.801798   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.021829   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.289792   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.289896   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.301063   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.521512   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.761098   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.761110   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.801396   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.416726   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.417219   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.417240   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.417651   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.522481   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.760002   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.760206   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.801257   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.022267   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.257969   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:43.260312   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.304149   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.522666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.759718   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:43.761579   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.800010   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.021599   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.258922   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:44.259066   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.300086   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.521602   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.758715   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:44.759687   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.801888   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.022545   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.258928   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:45.260028   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.300426   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.522347   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.757677   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:45.759429   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.801059   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.023666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.259131   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:46.259319   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.301039   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.521574   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.758246   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:46.759289   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.800758   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.022872   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.700346   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.701683   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:47.701903   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.702433   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.759173   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.759895   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:47.861235   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.021603   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.259458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:48.259485   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.300707   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.522271   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.762907   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:48.763255   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.800498   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.022348   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.257789   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:49.258990   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.300932   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.521296   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.759707   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.760030   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:49.801156   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.021582   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.259593   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:50.259614   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.300101   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.522458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.758309   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:50.759307   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.801005   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.021667   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.258800   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:51.259754   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.300360   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.522137   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.916983   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.918391   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:51.918708   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.022345   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.257769   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:52.259200   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.300612   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.522624   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.759128   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:52.760003   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.800696   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.022034   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.258260   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:53.259030   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.299898   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.522948   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.758046   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:53.759190   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.801909   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.022611   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.258314   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:54.259394   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.299868   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.522225   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.759462   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:54.759954   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.800966   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.021560   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.259668   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:55.260096   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.300543   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.522930   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.759164   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:55.759630   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.800281   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.023274   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.258687   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:56.258983   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.300450   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.521941   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.758690   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:56.759184   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.800444   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.022085   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.328096   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:57.328128   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.328468   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.522064   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.758754   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:57.761358   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.801386   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.022197   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.259116   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:58.259355   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.301472   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.522238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.757647   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:58.759138   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.800143   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:59.021428   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.259139   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:59.259914   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:59.300195   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:59.521969   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.757634   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:59.759388   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:59.801766   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:00.310029   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.310485   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.310541   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:00.310734   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:00.522275   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.757676   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:00.759851   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.800259   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:01.022105   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.263670   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:01.264256   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.363605   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:01.522274   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.758855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:01.759192   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.800380   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:02.022392   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.258770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:02.258779   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.300507   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:02.523063   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.757767   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:02.759609   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.800172   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:03.024853   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.258447   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:03.260135   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.301456   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:03.521270   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.759277   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.759579   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:03.859786   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:04.023200   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.259308   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:04.259454   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.302238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:04.524167   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.759036   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:04.759483   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.800855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:05.022461   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.257848   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:05.259070   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.300542   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:05.522141   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.757343   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:05.759078   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.800257   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:06.021588   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.259151   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:06.259229   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.301635   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:06.522501   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.760161   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:06.760475   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.800547   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:07.022162   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.260554   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:07.260733   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.300362   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:07.524441   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.757879   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:07.759690   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.799841   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:08.022590   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.258771   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.261346   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:08.300492   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:08.521937   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.760065   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:08.760608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.800923   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:09.023054   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.258254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:09.261196   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.303211   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:09.521992   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.759542   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:09.759968   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.800419   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:10.022241   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:10.257256   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:10.259665   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.301095   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:10.522381   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:10.758339   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:10.760016   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.800973   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:11.022131   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:11.257766   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:11.259848   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.300515   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:11.522584   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:11.759504   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:11.759819   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.800734   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:12.022702   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:12.259127   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.259205   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:12.301248   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:12.522307   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:12.759373   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:12.759784   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.800790   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:13.022473   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:13.258088   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:13.259484   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:13.301523   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:13.522640   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:13.760074   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:13.760590   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:13.861156   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:14.021516   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:14.259488   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:14.259642   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:14.300721   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:14.522807   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:14.778229   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:14.779139   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:14.873475   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:15.022821   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:15.259680   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:15.259809   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:15.300641   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:15.521637   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:15.758806   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:15.759633   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:15.800222   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:16.021553   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:16.259499   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:16.259517   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:16.299855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:16.522762   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:16.759439   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:16.759858   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:16.800448   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:17.022916   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:17.269753   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:17.273875   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:17.311380   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:17.521792   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:17.757420   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:17.760061   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:17.800763   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:18.022671   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:18.260927   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:18.261314   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:18.360499   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:18.522431   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:18.758039   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:18.759972   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:18.800762   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:19.021770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:19.258785   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:19.258915   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:19.300433   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:19.522477   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:19.758545   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:19.758909   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:19.799951   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:20.021583   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:20.258286   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:20.259349   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:20.300404   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:20.522162   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:20.757244   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:20.760035   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:20.800381   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:21.022666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:21.259375   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:21.259813   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:21.299671   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:21.522782   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:21.759071   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:21.759579   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:21.801715   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:22.022489   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:22.258632   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:22.258786   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:22.301546   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:22.521535   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:22.757652   13942 kapi.go:107] duration metric: took 56.503752424s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:32:22.759703   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:22.800194   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:23.021373   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:23.259556   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:23.300956   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:23.522488   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:23.759651   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:23.950468   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:24.021780   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:24.259077   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:24.300587   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:24.522714   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:24.759126   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:24.801761   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:25.021962   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:25.258702   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:25.300610   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:25.527977   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:25.758500   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:25.801128   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:26.024917   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:26.258889   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:26.300533   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:26.531719   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:26.760215   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:26.861604   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:27.022469   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:27.259796   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:27.301694   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:27.522577   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:27.759608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:27.799769   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:28.022221   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:28.260134   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:28.362251   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:28.522884   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:28.758529   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:28.800998   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:29.021597   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:29.260071   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:29.300411   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:29.521843   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:29.759942   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:29.808216   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:30.025869   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:30.258745   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:30.300960   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:30.526667   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:30.761078   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:30.808613   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:31.023050   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.258854   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:31.300480   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:31.522174   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.761507   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:31.800897   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:32.022757   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.261197   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:32.301193   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:32.522071   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.762443   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:32.801404   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:33.021999   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.260491   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:33.300695   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:33.525170   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.769640   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:33.868134   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:34.022189   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.260688   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:34.360810   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:34.525722   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.766523   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:34.805396   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:35.030161   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:35.258936   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:35.300824   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:35.522082   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:35.758581   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:35.801492   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:36.021288   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:36.259323   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:36.300415   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:36.522271   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:36.761188   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:36.800799   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:37.022023   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:37.262566   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:37.300820   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:37.522925   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:37.758831   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:37.799987   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:38.022158   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:38.260608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:38.362196   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:38.521238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:38.999060   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:38.999332   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:39.100770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:39.267733   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:39.304254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:39.527622   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:39.759148   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:39.801011   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:40.023997   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:40.258867   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:40.301651   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:40.521565   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:40.759515   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:40.800939   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:41.022706   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:41.259458   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:41.301688   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:41.806497   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:41.811813   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:41.812222   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:42.023382   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:42.267386   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:42.367885   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:42.525013   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:42.759644   13942 kapi.go:107] duration metric: took 1m16.504925037s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:32:42.800316   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:43.022950   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:43.300696   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:43.521739   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:43.802846   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:44.022227   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:44.300361   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:44.522479   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:44.802449   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:45.022566   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:45.300843   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:45.522072   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:45.800593   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:46.022008   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:46.301212   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:46.521319   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:46.800712   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:47.022599   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:47.301146   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:47.522228   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:47.801980   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:48.021550   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:48.301089   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:48.521254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:48.802057   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:49.022313   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:49.307681   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:49.522886   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:49.803712   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:50.022128   13942 kapi.go:107] duration metric: took 1m20.003852984s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:32:50.023467   13942 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-368929 cluster.
	I0915 06:32:50.024716   13942 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:32:50.025878   13942 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:32:50.304584   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:50.803369   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:51.300707   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:51.801178   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:52.301423   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:52.801624   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:53.532327   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:53.810592   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:54.301743   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:54.800975   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:55.300394   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:55.800085   13942 kapi.go:107] duration metric: took 1m27.004147412s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:32:55.802070   13942 out.go:177] * Enabled addons: default-storageclass, ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0915 06:32:55.803500   13942 addons.go:510] duration metric: took 1m38.272362908s for enable addons: enabled=[default-storageclass ingress-dns storage-provisioner nvidia-device-plugin helm-tiller cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0915 06:32:55.803536   13942 start.go:246] waiting for cluster config update ...
	I0915 06:32:55.803553   13942 start.go:255] writing updated cluster config ...
	I0915 06:32:55.803803   13942 ssh_runner.go:195] Run: rm -f paused
	I0915 06:32:55.854452   13942 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:32:55.856106   13942 out.go:177] * Done! kubectl is now configured to use "addons-368929" cluster and "default" namespace by default
	
	
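	(The kapi.go:96 entries above are minikube's addon-readiness polling: each addon registers a label selector, and the wait loop re-lists matching pods every few hundred milliseconds until they leave Pending, after which kapi.go:107 records the total wait as a duration metric. Below is a minimal client-go sketch of that polling pattern for context only; it is not minikube's actual kapi implementation, and the kubeconfig path, namespace, selector, interval, and timeout are assumed values.)

// Illustrative sketch only (not minikube's code): poll pods matching a label
// selector until every match reports Running, mirroring the kapi.go:96 loop
// logged above. Paths, selector, interval, and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods re-lists pods matching selector in ns until all are Running,
// sleeping interval between attempts, or returns when ctx expires.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
		}
		// Roughly the ~250-500ms cadence visible in the timestamps above.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	// Assumed kubeconfig location (~/.kube/config); minikube merges profile
	// credentials into it on start.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	start := time.Now()
	if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry", 500*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to wait for kubernetes.io/minikube-addons=registry\n", time.Since(start))
}

	(Pointed at this cluster's kubeconfig, such a loop would finish with output comparable to the "duration metric: took ..." lines recorded above.)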
	==> CRI-O <==
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.582390042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382531582363724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571196,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26a1dd7b-c87c-4b16-a186-dd845b564fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.583136166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7601fb29-7553-4c9d-bc42-4b385d962e42 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.583192544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7601fb29-7553-4c9d-bc42-4b385d962e42 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.583785132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f95d4b1acd0cfe082283caceb9b8c48fbcd59429c53c6a34e09fc96e2f7de2b,PodSandboxId:a28d53a27312fab68ef69af2a3d58223400e1627329d399ac69072362ebe172f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726382475938170869,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-37b863f6-d527-401f-89ba-956f4262c0c9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cb7482c3-cc73-43ad-a8ef-c85a59f69fd4,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe7d5983063737dc6ef4772587346cb587515ab48cc7f9ff994f583c39407df,PodSandboxId:406f21cdf3d32f1b3a519b8e3845dbfe73127d84e2af02190caee396d0933630,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726382294067558076,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c49qm,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 39093114-74ee-4ef8-895c-6694ca3debde,},Annotations:map[string]string{io.kubernetes.container.ha
sh: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubern
etes.pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528,PodSandboxId:412309d7483616884f103d2d26329e6fe69f68ffd44d25bfdec50e62cade888c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726381961930852416,Labels:map[string]string{io.kubernetes.container.name: controll
er,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gcb5h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14d82e54-1bb1-43c4-8e4d-d47f81096940,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:c
reate,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff
0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e,PodSandboxId:0af78ecd18a259fa34c06eceb73560a45e814a32f80e6309aa3b0d892575d940,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726381895017166207,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fa65c-7021-4ddf-a816-9f840f28af7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\
":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d
1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7601fb29-7553-4c9d-bc42-4b385d962e42 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.615923743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5877f218-90db-43fe-b3bd-086151154edf name=/runtime.v1.RuntimeService/Version
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.616001165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5877f218-90db-43fe-b3bd-086151154edf name=/runtime.v1.RuntimeService/Version
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.617214471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49123201-5d38-42df-a72c-1222f243a94a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.618334259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382531618307099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571196,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49123201-5d38-42df-a72c-1222f243a94a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.618968173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6788e92-4430-4adb-9454-3ac9b7163ac7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.619026596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6788e92-4430-4adb-9454-3ac9b7163ac7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.619425654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f95d4b1acd0cfe082283caceb9b8c48fbcd59429c53c6a34e09fc96e2f7de2b,PodSandboxId:a28d53a27312fab68ef69af2a3d58223400e1627329d399ac69072362ebe172f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726382475938170869,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-37b863f6-d527-401f-89ba-956f4262c0c9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cb7482c3-cc73-43ad-a8ef-c85a59f69fd4,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe7d5983063737dc6ef4772587346cb587515ab48cc7f9ff994f583c39407df,PodSandboxId:406f21cdf3d32f1b3a519b8e3845dbfe73127d84e2af02190caee396d0933630,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726382294067558076,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c49qm,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 39093114-74ee-4ef8-895c-6694ca3debde,},Annotations:map[string]string{io.kubernetes.container.ha
sh: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubern
etes.pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528,PodSandboxId:412309d7483616884f103d2d26329e6fe69f68ffd44d25bfdec50e62cade888c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726381961930852416,Labels:map[string]string{io.kubernetes.container.name: controll
er,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gcb5h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14d82e54-1bb1-43c4-8e4d-d47f81096940,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:c
reate,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff
0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e,PodSandboxId:0af78ecd18a259fa34c06eceb73560a45e814a32f80e6309aa3b0d892575d940,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726381895017166207,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fa65c-7021-4ddf-a816-9f840f28af7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\
":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d
1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6788e92-4430-4adb-9454-3ac9b7163ac7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.662515857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9615d00-5e22-4d62-98f0-39e33c845837 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.662595926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9615d00-5e22-4d62-98f0-39e33c845837 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.664502680Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dea47671-ba0f-41dd-8e49-b2d18da47d91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.665873739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382531665846802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571196,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dea47671-ba0f-41dd-8e49-b2d18da47d91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.666775169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=196796b1-1ce3-4901-8fba-697b220e10f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.666880159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=196796b1-1ce3-4901-8fba-697b220e10f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.667333100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f95d4b1acd0cfe082283caceb9b8c48fbcd59429c53c6a34e09fc96e2f7de2b,PodSandboxId:a28d53a27312fab68ef69af2a3d58223400e1627329d399ac69072362ebe172f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726382475938170869,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-37b863f6-d527-401f-89ba-956f4262c0c9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cb7482c3-cc73-43ad-a8ef-c85a59f69fd4,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe7d5983063737dc6ef4772587346cb587515ab48cc7f9ff994f583c39407df,PodSandboxId:406f21cdf3d32f1b3a519b8e3845dbfe73127d84e2af02190caee396d0933630,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726382294067558076,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c49qm,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 39093114-74ee-4ef8-895c-6694ca3debde,},Annotations:map[string]string{io.kubernetes.container.ha
sh: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubern
etes.pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528,PodSandboxId:412309d7483616884f103d2d26329e6fe69f68ffd44d25bfdec50e62cade888c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726381961930852416,Labels:map[string]string{io.kubernetes.container.name: controll
er,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gcb5h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14d82e54-1bb1-43c4-8e4d-d47f81096940,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:c
reate,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff
0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e,PodSandboxId:0af78ecd18a259fa34c06eceb73560a45e814a32f80e6309aa3b0d892575d940,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726381895017166207,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fa65c-7021-4ddf-a816-9f840f28af7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\
":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d
1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=196796b1-1ce3-4901-8fba-697b220e10f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.705575425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0492170f-a289-4bc2-b99a-a87cad5fa20a name=/runtime.v1.RuntimeService/Version
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.705668413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0492170f-a289-4bc2-b99a-a87cad5fa20a name=/runtime.v1.RuntimeService/Version
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.707031263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b39c9d3f-3d68-4043-a0d7-3273f6ce849d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.708336537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382531708306272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571196,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b39c9d3f-3d68-4043-a0d7-3273f6ce849d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.708822382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70485ee7-519a-4f5c-b5fb-36059b75c7be name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.708881326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70485ee7-519a-4f5c-b5fb-36059b75c7be name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:42:11 addons-368929 crio[662]: time="2024-09-15 06:42:11.709932931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f95d4b1acd0cfe082283caceb9b8c48fbcd59429c53c6a34e09fc96e2f7de2b,PodSandboxId:a28d53a27312fab68ef69af2a3d58223400e1627329d399ac69072362ebe172f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726382475938170869,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-37b863f6-d527-401f-89ba-956f4262c0c9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cb7482c3-cc73-43ad-a8ef-c85a59f69fd4,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe7d5983063737dc6ef4772587346cb587515ab48cc7f9ff994f583c39407df,PodSandboxId:406f21cdf3d32f1b3a519b8e3845dbfe73127d84e2af02190caee396d0933630,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726382294067558076,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c49qm,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 39093114-74ee-4ef8-895c-6694ca3debde,},Annotations:map[string]string{io.kubernetes.container.ha
sh: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubern
etes.pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528,PodSandboxId:412309d7483616884f103d2d26329e6fe69f68ffd44d25bfdec50e62cade888c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726381961930852416,Labels:map[string]string{io.kubernetes.container.name: controll
er,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gcb5h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14d82e54-1bb1-43c4-8e4d-d47f81096940,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:c
reate,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff
0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e,PodSandboxId:0af78ecd18a259fa34c06eceb73560a45e814a32f80e6309aa3b0d892575d940,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726381895017166207,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fa65c-7021-4ddf-a816-9f840f28af7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\
":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d
1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70485ee7-519a-4f5c-b5fb-36059b75c7be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00c6d745c3b5a       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              10 seconds ago      Running             nginx                     0                   56736db040b57       nginx
	1f95d4b1acd0c       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             55 seconds ago      Exited              helper-pod                0                   a28d53a27312f       helper-pod-delete-pvc-37b863f6-d527-401f-89ba-956f4262c0c9
	1fe7d59830637       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            3 minutes ago       Exited              gadget                    6                   406f21cdf3d32       gadget-c49qm
	af20c2eee64f4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   4805b7ff0a6b1       gcp-auth-89d5ffd79-g2rmd
	8b40c9a7366b0       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   412309d748361       ingress-nginx-controller-bc57996ff-gcb5h
	f89afc4680145       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                     0                   8489a665b46d5       ingress-nginx-admission-patch-dd66v
	20da8da7f0f5d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                    0                   b5070610beb19       ingress-nginx-admission-create-9mn4k
	e762ef5d36b86       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server            0                   5465db13b3322       metrics-server-84c5f94fbc-2pshh
	45670c34a0e67       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   0af78ecd18a25       kube-ingress-dns-minikube
	522296a807289       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   14b4ae1ab9f1b       storage-provisioner
	0eaf92b0ac4cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   b19df699e240a       coredns-7c65d6cfc9-d42kz
	f44a755ad6406       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                0                   3090e56371ab7       kube-proxy-ldpsk
	2d2c642ca90bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                      0                   c91b2b7971471       etcd-addons-368929
	5278a91f04afe       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             11 minutes ago      Running             kube-scheduler            0                   d1a7384c192cb       kube-scheduler-addons-368929
	66eb2bd2d4313       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             11 minutes ago      Running             kube-controller-manager   0                   ddbb5486a2f5f       kube-controller-manager-addons-368929
	0f00b1281db41       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             11 minutes ago      Running             kube-apiserver            0                   801081b18db2c       kube-apiserver-addons-368929
	
	
	==> coredns [0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e] <==
	[INFO] 10.244.0.7:60872 - 2697 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000372412s
	[INFO] 10.244.0.7:54481 - 63880 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200669s
	[INFO] 10.244.0.7:54481 - 36493 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014208s
	[INFO] 10.244.0.7:58760 - 443 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090787s
	[INFO] 10.244.0.7:58760 - 23481 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088177s
	[INFO] 10.244.0.7:48535 - 47705 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000271192s
	[INFO] 10.244.0.7:48535 - 54567 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083036s
	[INFO] 10.244.0.7:42330 - 4731 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138141s
	[INFO] 10.244.0.7:42330 - 6517 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000092133s
	[INFO] 10.244.0.7:47964 - 26953 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000283443s
	[INFO] 10.244.0.7:47964 - 19270 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142949s
	[INFO] 10.244.0.7:49955 - 21487 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137257s
	[INFO] 10.244.0.7:49955 - 61676 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095206s
	[INFO] 10.244.0.7:38355 - 23195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000252309s
	[INFO] 10.244.0.7:38355 - 62100 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060261s
	[INFO] 10.244.0.7:43701 - 7554 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161529s
	[INFO] 10.244.0.7:43701 - 65420 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000048462s
	[INFO] 10.244.0.22:50845 - 48293 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000496971s
	[INFO] 10.244.0.22:56694 - 7666 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106022s
	[INFO] 10.244.0.22:53136 - 48746 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122296s
	[INFO] 10.244.0.22:43399 - 31030 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149247s
	[INFO] 10.244.0.22:48872 - 36794 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141706s
	[INFO] 10.244.0.22:38135 - 52360 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121673s
	[INFO] 10.244.0.22:39775 - 36027 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000834966s
	[INFO] 10.244.0.22:40967 - 58177 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00127761s
	
	
	==> describe nodes <==
	Name:               addons-368929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-368929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-368929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_31_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-368929
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:31:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-368929
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:42:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:41:54 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:41:54 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:41:54 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:41:54 +0000   Sun, 15 Sep 2024 06:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    addons-368929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6b3f2f71dbb42e29461dbb3bd421d93
	  System UUID:                a6b3f2f7-1dbb-42e2-9461-dbb3bd421d93
	  Boot ID:                    da80a0da-5697-4701-b6a4-39271e495e6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  gadget                      gadget-c49qm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  gcp-auth                    gcp-auth-89d5ffd79-g2rmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-gcb5h    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-d42kz                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-368929                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-368929                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-368929       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-ldpsk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-368929                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-2pshh             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-368929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-368929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-368929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-368929 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-368929 event: Registered Node addons-368929 in Controller
	
	
	==> dmesg <==
	[  +5.044480] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.711472] kauditd_printk_skb: 166 callbacks suppressed
	[  +6.450745] kauditd_printk_skb: 66 callbacks suppressed
	[ +17.481041] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.378291] kauditd_printk_skb: 32 callbacks suppressed
	[Sep15 06:32] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.107181] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.597636] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.061750] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.516101] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.544554] kauditd_printk_skb: 47 callbacks suppressed
	[Sep15 06:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:38] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:40] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:41] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.724123] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.326898] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.150758] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.170813] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.523089] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.886920] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.541296] kauditd_printk_skb: 33 callbacks suppressed
	[Sep15 06:42] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.212644] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2] <==
	{"level":"warn","ts":"2024-09-15T06:32:38.969879Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:32:38.614079Z","time spent":"355.795327ms","remote":"127.0.0.1:38656","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true "}
	{"level":"warn","ts":"2024-09-15T06:32:38.969972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.971025ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:38.969993Z","caller":"traceutil/trace.go:171","msg":"trace[1761732918] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1081; }","duration":"272.991379ms","start":"2024-09-15T06:32:38.696997Z","end":"2024-09-15T06:32:38.969988Z","steps":["trace[1761732918] 'agreement among raft nodes before linearized reading'  (duration: 272.960931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:38.970074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.201553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:38.970093Z","caller":"traceutil/trace.go:171","msg":"trace[235109500] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"192.218119ms","start":"2024-09-15T06:32:38.777867Z","end":"2024-09-15T06:32:38.970085Z","steps":["trace[235109500] 'agreement among raft nodes before linearized reading'  (duration: 192.18886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.779440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.051742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:41.782084Z","caller":"traceutil/trace.go:171","msg":"trace[1975915837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1088; }","duration":"281.712677ms","start":"2024-09-15T06:32:41.500363Z","end":"2024-09-15T06:32:41.782076Z","steps":["trace[1975915837] 'range keys from in-memory index tree'  (duration: 279.004355ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:41.781563Z","caller":"traceutil/trace.go:171","msg":"trace[25339734] linearizableReadLoop","detail":"{readStateIndex:1122; appliedIndex:1121; }","duration":"133.379421ms","start":"2024-09-15T06:32:41.648165Z","end":"2024-09-15T06:32:41.781545Z","steps":["trace[25339734] 'read index received'  (duration: 126.703432ms)","trace[25339734] 'applied index is now lower than readState.Index'  (duration: 6.675186ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:32:41.781804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.625248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-15T06:32:41.781933Z","caller":"traceutil/trace.go:171","msg":"trace[1390778342] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"155.371511ms","start":"2024-09-15T06:32:41.626549Z","end":"2024-09-15T06:32:41.781921Z","steps":["trace[1390778342] 'process raft request'  (duration: 148.372455ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:41.783606Z","caller":"traceutil/trace.go:171","msg":"trace[325349942] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1089; }","duration":"135.434363ms","start":"2024-09-15T06:32:41.648161Z","end":"2024-09-15T06:32:41.783595Z","steps":["trace[325349942] 'agreement among raft nodes before linearized reading'  (duration: 133.430374ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.783629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.612501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:41.783886Z","caller":"traceutil/trace.go:171","msg":"trace[808376363] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"100.870711ms","start":"2024-09-15T06:32:41.683006Z","end":"2024-09-15T06:32:41.783876Z","steps":["trace[808376363] 'agreement among raft nodes before linearized reading'  (duration: 100.588865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.786261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.155485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.212\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-09-15T06:32:41.787603Z","caller":"traceutil/trace.go:171","msg":"trace[32333181] range","detail":"{range_begin:/registry/masterleases/192.168.39.212; range_end:; response_count:1; response_revision:1089; }","duration":"104.495783ms","start":"2024-09-15T06:32:41.683094Z","end":"2024-09-15T06:32:41.787590Z","steps":["trace[32333181] 'agreement among raft nodes before linearized reading'  (duration: 103.092831ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:53.500272Z","caller":"traceutil/trace.go:171","msg":"trace[423926321] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1188; }","duration":"222.338479ms","start":"2024-09-15T06:32:53.277918Z","end":"2024-09-15T06:32:53.500256Z","steps":["trace[423926321] 'read index received'  (duration: 222.103394ms)","trace[423926321] 'applied index is now lower than readState.Index'  (duration: 234.479µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:32:53.500508Z","caller":"traceutil/trace.go:171","msg":"trace[865342865] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"458.921949ms","start":"2024-09-15T06:32:53.041572Z","end":"2024-09-15T06:32:53.500494Z","steps":["trace[865342865] 'process raft request'  (duration: 458.504383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:53.500612Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:32:53.041556Z","time spent":"459.003891ms","remote":"127.0.0.1:38690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1144 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-15T06:32:53.500512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.59145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:53.500848Z","caller":"traceutil/trace.go:171","msg":"trace[2102283515] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1153; }","duration":"222.946871ms","start":"2024-09-15T06:32:53.277893Z","end":"2024-09-15T06:32:53.500839Z","steps":["trace[2102283515] 'agreement among raft nodes before linearized reading'  (duration: 222.546912ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:41:08.543557Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1522}
	{"level":"info","ts":"2024-09-15T06:41:08.573071Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1522,"took":"28.970485ms","hash":4115302871,"current-db-size-bytes":6864896,"current-db-size":"6.9 MB","current-db-size-in-use-bytes":3567616,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-15T06:41:08.573176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4115302871,"revision":1522,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-15T06:41:20.310795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.008412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:41:20.310906Z","caller":"traceutil/trace.go:171","msg":"trace[1098128179] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2119; }","duration":"220.206928ms","start":"2024-09-15T06:41:20.090684Z","end":"2024-09-15T06:41:20.310891Z","steps":["trace[1098128179] 'range keys from in-memory index tree'  (duration: 219.905648ms)"],"step_count":1}
	
	
	==> gcp-auth [af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73] <==
	2024/09/15 06:32:56 Ready to write response ...
	2024/09/15 06:32:56 Ready to marshal response ...
	2024/09/15 06:32:56 Ready to write response ...
	2024/09/15 06:40:59 Ready to marshal response ...
	2024/09/15 06:40:59 Ready to write response ...
	2024/09/15 06:40:59 Ready to marshal response ...
	2024/09/15 06:40:59 Ready to write response ...
	2024/09/15 06:41:07 Ready to marshal response ...
	2024/09/15 06:41:07 Ready to write response ...
	2024/09/15 06:41:09 Ready to marshal response ...
	2024/09/15 06:41:09 Ready to write response ...
	2024/09/15 06:41:12 Ready to marshal response ...
	2024/09/15 06:41:12 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:40 Ready to marshal response ...
	2024/09/15 06:41:40 Ready to write response ...
	2024/09/15 06:41:56 Ready to marshal response ...
	2024/09/15 06:41:56 Ready to write response ...
	2024/09/15 06:42:01 Ready to marshal response ...
	2024/09/15 06:42:01 Ready to write response ...
	
	
	==> kernel <==
	 06:42:12 up 11 min,  0 users,  load average: 1.10, 0.65, 0.47
	Linux addons-368929 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070] <==
	I0915 06:32:58.850331       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0915 06:32:58.856050       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0915 06:41:17.313620       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.178.105"}
	I0915 06:41:22.185630       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0915 06:41:28.694030       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:37.562228       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:38.571198       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:39.581372       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:40.597002       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:41.607613       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:42.617845       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:43.624836       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 06:41:55.500605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.500667       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.520225       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.520373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.561239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.561354       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.644936       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.645049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:41:56.582005       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:41:56.645832       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 06:41:56.728173       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	W0915 06:41:56.736990       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0915 06:41:56.920189       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.144.22"}
	
	
	==> kube-controller-manager [66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a] <==
	E0915 06:41:56.647783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0915 06:41:56.740303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:57.614407       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:57.614500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:57.672676       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:57.672860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:57.989561       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:57.989684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:59.858974       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:59.859053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:42:00.482971       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:00.483004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:42:01.166459       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:01.166670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:42:01.422380       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0915 06:42:03.837990       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:03.838049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:42:05.533488       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:05.533917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:42:05.690629       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:05.690800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:42:06.783113       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="11.632µs"
	I0915 06:42:10.506183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.469µs"
	W0915 06:42:10.672941       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:10.673069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 06:31:21.923821       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 06:31:22.107135       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	E0915 06:31:22.107232       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:31:22.469316       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 06:31:22.469382       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 06:31:22.469406       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:31:22.502604       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:31:22.502994       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:31:22.503027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:31:22.513405       1 config.go:199] "Starting service config controller"
	I0915 06:31:22.517572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:31:22.517677       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:31:22.517749       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:31:22.524073       1 config.go:328] "Starting node config controller"
	I0915 06:31:22.524163       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:31:22.617837       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:31:22.617902       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:31:22.624325       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da] <==
	W0915 06:31:10.099747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:10.099810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:10.971911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:31:10.972055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:10.992635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:10.992769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.044975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:31:11.045071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.099337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:31:11.099564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.127086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:31:11.127389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.176096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:11.176240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.192815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:31:11.193073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.242830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:31:11.242950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.291677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:31:11.291812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.319296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:11.319464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.333004       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:31:11.333054       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:31:13.689226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:42:07 addons-368929 kubelet[1209]: I0915 06:42:07.179880    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6012a392-8d4a-4d69-a877-31fa7f992089-kube-api-access-rzpt4" (OuterVolumeSpecName: "kube-api-access-rzpt4") pod "6012a392-8d4a-4d69-a877-31fa7f992089" (UID: "6012a392-8d4a-4d69-a877-31fa7f992089"). InnerVolumeSpecName "kube-api-access-rzpt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:42:07 addons-368929 kubelet[1209]: I0915 06:42:07.277475    1209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rzpt4\" (UniqueName: \"kubernetes.io/projected/6012a392-8d4a-4d69-a877-31fa7f992089-kube-api-access-rzpt4\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:42:08 addons-368929 kubelet[1209]: I0915 06:42:08.530670    1209 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6012a392-8d4a-4d69-a877-31fa7f992089" path="/var/lib/kubelet/pods/6012a392-8d4a-4d69-a877-31fa7f992089/volumes"
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.164265    1209 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9df6123860a2b22ac455ebe431428007a4d85a6db2334549a1ac97fbb67d610"
	Sep 15 06:42:10 addons-368929 kubelet[1209]: E0915 06:42:10.529041    1209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8076028-6672-48b6-8085-14b06a0a0268"
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.704133    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snxft\" (UniqueName: \"kubernetes.io/projected/9e7d762d-41a0-460e-ae5b-e3dc462476ed-kube-api-access-snxft\") pod \"9e7d762d-41a0-460e-ae5b-e3dc462476ed\" (UID: \"9e7d762d-41a0-460e-ae5b-e3dc462476ed\") "
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.704201    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9e7d762d-41a0-460e-ae5b-e3dc462476ed-gcp-creds\") pod \"9e7d762d-41a0-460e-ae5b-e3dc462476ed\" (UID: \"9e7d762d-41a0-460e-ae5b-e3dc462476ed\") "
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.704308    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e7d762d-41a0-460e-ae5b-e3dc462476ed-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9e7d762d-41a0-460e-ae5b-e3dc462476ed" (UID: "9e7d762d-41a0-460e-ae5b-e3dc462476ed"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.714321    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7d762d-41a0-460e-ae5b-e3dc462476ed-kube-api-access-snxft" (OuterVolumeSpecName: "kube-api-access-snxft") pod "9e7d762d-41a0-460e-ae5b-e3dc462476ed" (UID: "9e7d762d-41a0-460e-ae5b-e3dc462476ed"). InnerVolumeSpecName "kube-api-access-snxft". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.804857    1209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-snxft\" (UniqueName: \"kubernetes.io/projected/9e7d762d-41a0-460e-ae5b-e3dc462476ed-kube-api-access-snxft\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.804883    1209 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9e7d762d-41a0-460e-ae5b-e3dc462476ed-gcp-creds\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.905902    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54dqj\" (UniqueName: \"kubernetes.io/projected/29e66421-b96f-416d-b126-9c3b0d11bc7f-kube-api-access-54dqj\") pod \"29e66421-b96f-416d-b126-9c3b0d11bc7f\" (UID: \"29e66421-b96f-416d-b126-9c3b0d11bc7f\") "
	Sep 15 06:42:10 addons-368929 kubelet[1209]: I0915 06:42:10.911517    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e66421-b96f-416d-b126-9c3b0d11bc7f-kube-api-access-54dqj" (OuterVolumeSpecName: "kube-api-access-54dqj") pod "29e66421-b96f-416d-b126-9c3b0d11bc7f" (UID: "29e66421-b96f-416d-b126-9c3b0d11bc7f"). InnerVolumeSpecName "kube-api-access-54dqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.006467    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dmwf\" (UniqueName: \"kubernetes.io/projected/cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce-kube-api-access-7dmwf\") pod \"cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce\" (UID: \"cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce\") "
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.006558    1209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-54dqj\" (UniqueName: \"kubernetes.io/projected/29e66421-b96f-416d-b126-9c3b0d11bc7f-kube-api-access-54dqj\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.008899    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce-kube-api-access-7dmwf" (OuterVolumeSpecName: "kube-api-access-7dmwf") pod "cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce" (UID: "cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce"). InnerVolumeSpecName "kube-api-access-7dmwf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.107813    1209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7dmwf\" (UniqueName: \"kubernetes.io/projected/cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce-kube-api-access-7dmwf\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.179085    1209 scope.go:117] "RemoveContainer" containerID="b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40"
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.276220    1209 scope.go:117] "RemoveContainer" containerID="b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40"
	Sep 15 06:42:11 addons-368929 kubelet[1209]: E0915 06:42:11.280057    1209 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40\": container with ID starting with b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40 not found: ID does not exist" containerID="b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40"
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.280129    1209 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40"} err="failed to get container status \"b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40\": rpc error: code = NotFound desc = could not find container \"b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40\": container with ID starting with b60bbe74023ba706b56a4ecf5c9fa5465a889af65e6be7359a10150d540b9a40 not found: ID does not exist"
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.280215    1209 scope.go:117] "RemoveContainer" containerID="722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8"
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.304448    1209 scope.go:117] "RemoveContainer" containerID="722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8"
	Sep 15 06:42:11 addons-368929 kubelet[1209]: E0915 06:42:11.305010    1209 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8\": container with ID starting with 722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8 not found: ID does not exist" containerID="722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8"
	Sep 15 06:42:11 addons-368929 kubelet[1209]: I0915 06:42:11.305038    1209 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8"} err="failed to get container status \"722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8\": rpc error: code = NotFound desc = could not find container \"722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8\": container with ID starting with 722afd7a964b1a1b48a487737fa7562350a353c28def8e24b6ac03973ab1bfc8 not found: ID does not exist"
	
	
	==> storage-provisioner [522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2] <==
	I0915 06:31:26.557262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:31:26.648171       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:31:26.648246       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:31:26.724533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:31:26.725105       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99973a3d-83c9-43fb-b77d-d8ca8d8c9277", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962 became leader
	I0915 06:31:26.729461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962!
	I0915 06:31:26.839908       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962!
	

-- /stdout --
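Note on the captured log above: the kube-scheduler's repeated "forbidden" list/watch warnings typically appear while the control plane is still coming up, before the scheduler's RBAC bindings are served, and the later "Caches are synced" line indicates they cleared on their own. A rough manual check of the scheduler's permissions once the cluster is up (a sketch for illustration only, not part of the test) could be:

	kubectl --context addons-368929 get clusterrolebinding system:kube-scheduler -o wide    # default binding for the scheduler user
	kubectl --context addons-368929 auth can-i list nodes --as=system:kube-scheduler        # probe one of the verbs denied in the log above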
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-368929 -n addons-368929
helpers_test.go:261: (dbg) Run:  kubectl --context addons-368929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox gadget-c49qm ingress-nginx-admission-create-9mn4k ingress-nginx-admission-patch-dd66v
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-368929 describe pod busybox gadget-c49qm ingress-nginx-admission-create-9mn4k ingress-nginx-admission-patch-dd66v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-368929 describe pod busybox gadget-c49qm ingress-nginx-admission-create-9mn4k ingress-nginx-admission-patch-dd66v: exit status 1 (67.298786ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-368929/192.168.39.212
	Start Time:       Sun, 15 Sep 2024 06:32:56 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rz99b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rz99b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-368929
	  Normal   Pulling    7m46s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gadget-c49qm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-9mn4k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dd66v" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-368929 describe pod busybox gadget-c49qm ingress-nginx-admission-create-9mn4k ingress-nginx-admission-patch-dd66v: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.25s)
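The only described pod with events above is busybox, and those events show the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc failing with "unable to retrieve auth token: invalid username/password" rather than a network error, which suggests the fake credentials injected by the gcp-auth addon (PROJECT_ID: this_is_fake, the mounted gcp-creds file) are being presented to gcr.io. A hedged way to separate a credential problem from a registry or network problem (illustrative commands, not part of the test) is to pull on the node directly and inspect what was injected:

	# Pull the image via CRI-O on the node, bypassing the kubelet's credential chain
	out/minikube-linux-amd64 -p addons-368929 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	# Inspect the credential-related environment the gcp-auth webhook injected into the pod
	kubectl --context addons-368929 get pod busybox -o jsonpath='{.spec.containers[0].env}'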

x
+
TestAddons/parallel/Ingress (154.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress


=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-368929 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-368929 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-368929 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [03db1b25-54f4-4882-85e5-a3edf2b37fd6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [03db1b25-54f4-4882-85e5-a3edf2b37fd6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.00422292s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-368929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.330686299s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
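Exit status 28, surfaced here through ssh, is curl's timed-out code, so the request to the ingress controller never received an HTTP response at all rather than receiving the wrong one. A minimal manual re-check (illustrative only, assuming the nginx Service and Ingress from the testdata manifests are still applied and run before the ingress addon is disabled below) could look like:

	out/minikube-linux-amd64 -p addons-368929 ssh "curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-368929 -n ingress-nginx get pods,svc -o wide     # is the controller Running and exposed on :80?
	kubectl --context addons-368929 get ingress -A                            # was the test Ingress admitted and given an address?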
addons_test.go:288: (dbg) Run:  kubectl --context addons-368929 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.212
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 addons disable ingress-dns --alsologtostderr -v=1: (1.017335841s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 addons disable ingress --alsologtostderr -v=1: (7.70046846s)
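The ingress-dns follow-up appears to succeed (no error is reported for the nslookup): the query is sent to the addon's DNS responder on the node IP returned by "minikube ip" for the host published by testdata/ingress-dns-example-v1.yaml. A hand-run equivalent, sketched under the same assumptions:

	MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-368929 ip)
	nslookup hello-john.test "$MINIKUBE_IP"    # the second argument makes nslookup query that IP as the DNS server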
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-368929 -n addons-368929
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 logs -n 25: (1.358313326s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-119130                                                                     | download-only-119130 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-832723                                                                     | download-only-832723 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-119130                                                                     | download-only-119130 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-702457 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | binary-mirror-702457                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37011                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-702457                                                                     | binary-mirror-702457 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-368929 --wait=true                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-368929 ssh cat                                                                       | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | /opt/local-path-provisioner/pvc-37b863f6-d527-401f-89ba-956f4262c0c9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-368929                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-368929                                                                            |                      |         |         |                     |                     |
	| addons  | addons-368929 addons                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-368929 ssh curl -s                                                                   | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-368929 ip                                                                            | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| ip      | addons-368929 ip                                                                            | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:30:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:30:34.502587   13942 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:30:34.502678   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:34.502685   13942 out.go:358] Setting ErrFile to fd 2...
	I0915 06:30:34.502689   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:34.502874   13942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 06:30:34.503472   13942 out.go:352] Setting JSON to false
	I0915 06:30:34.504273   13942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":780,"bootTime":1726381054,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:30:34.504369   13942 start.go:139] virtualization: kvm guest
	I0915 06:30:34.507106   13942 out.go:177] * [addons-368929] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:30:34.508386   13942 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:30:34.508405   13942 notify.go:220] Checking for updates...
	I0915 06:30:34.511198   13942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:30:34.512524   13942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:30:34.513658   13942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:34.514857   13942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:30:34.515998   13942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:30:34.517110   13942 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:30:34.547737   13942 out.go:177] * Using the kvm2 driver based on user configuration
	I0915 06:30:34.548792   13942 start.go:297] selected driver: kvm2
	I0915 06:30:34.548818   13942 start.go:901] validating driver "kvm2" against <nil>
	I0915 06:30:34.548833   13942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:30:34.549511   13942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:34.549598   13942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 06:30:34.563630   13942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 06:30:34.563667   13942 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:30:34.563907   13942 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:30:34.563939   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:30:34.563977   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:30:34.563985   13942 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:30:34.564028   13942 start.go:340] cluster config:
	{Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:34.564113   13942 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:34.565784   13942 out.go:177] * Starting "addons-368929" primary control-plane node in "addons-368929" cluster
	I0915 06:30:34.566926   13942 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:34.566954   13942 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:30:34.566963   13942 cache.go:56] Caching tarball of preloaded images
	I0915 06:30:34.567049   13942 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 06:30:34.567062   13942 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:30:34.567364   13942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json ...
	I0915 06:30:34.567385   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json: {Name:mk52f636c4ede8c4dfee1d713e4fd97fe830cfd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:34.567522   13942 start.go:360] acquireMachinesLock for addons-368929: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 06:30:34.567577   13942 start.go:364] duration metric: took 39.328µs to acquireMachinesLock for "addons-368929"
	I0915 06:30:34.567599   13942 start.go:93] Provisioning new machine with config: &{Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:30:34.567665   13942 start.go:125] createHost starting for "" (driver="kvm2")
	I0915 06:30:34.569232   13942 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0915 06:30:34.569343   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:30:34.569382   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:30:34.583188   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0915 06:30:34.583668   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:30:34.584246   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:30:34.584267   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:30:34.584599   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:30:34.584752   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:34.584884   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:34.585061   13942 start.go:159] libmachine.API.Create for "addons-368929" (driver="kvm2")
	I0915 06:30:34.585092   13942 client.go:168] LocalClient.Create starting
	I0915 06:30:34.585134   13942 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 06:30:34.864190   13942 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 06:30:35.049893   13942 main.go:141] libmachine: Running pre-create checks...
	I0915 06:30:35.049914   13942 main.go:141] libmachine: (addons-368929) Calling .PreCreateCheck
	I0915 06:30:35.050423   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:35.050849   13942 main.go:141] libmachine: Creating machine...
	I0915 06:30:35.050864   13942 main.go:141] libmachine: (addons-368929) Calling .Create
	I0915 06:30:35.051026   13942 main.go:141] libmachine: (addons-368929) Creating KVM machine...
	I0915 06:30:35.052240   13942 main.go:141] libmachine: (addons-368929) DBG | found existing default KVM network
	I0915 06:30:35.052972   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.052837   13964 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0915 06:30:35.053018   13942 main.go:141] libmachine: (addons-368929) DBG | created network xml: 
	I0915 06:30:35.053051   13942 main.go:141] libmachine: (addons-368929) DBG | <network>
	I0915 06:30:35.053059   13942 main.go:141] libmachine: (addons-368929) DBG |   <name>mk-addons-368929</name>
	I0915 06:30:35.053064   13942 main.go:141] libmachine: (addons-368929) DBG |   <dns enable='no'/>
	I0915 06:30:35.053070   13942 main.go:141] libmachine: (addons-368929) DBG |   
	I0915 06:30:35.053076   13942 main.go:141] libmachine: (addons-368929) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0915 06:30:35.053085   13942 main.go:141] libmachine: (addons-368929) DBG |     <dhcp>
	I0915 06:30:35.053090   13942 main.go:141] libmachine: (addons-368929) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0915 06:30:35.053095   13942 main.go:141] libmachine: (addons-368929) DBG |     </dhcp>
	I0915 06:30:35.053099   13942 main.go:141] libmachine: (addons-368929) DBG |   </ip>
	I0915 06:30:35.053104   13942 main.go:141] libmachine: (addons-368929) DBG |   
	I0915 06:30:35.053114   13942 main.go:141] libmachine: (addons-368929) DBG | </network>
	I0915 06:30:35.053144   13942 main.go:141] libmachine: (addons-368929) DBG | 
	I0915 06:30:35.058552   13942 main.go:141] libmachine: (addons-368929) DBG | trying to create private KVM network mk-addons-368929 192.168.39.0/24...
	I0915 06:30:35.121581   13942 main.go:141] libmachine: (addons-368929) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 ...
	I0915 06:30:35.121603   13942 main.go:141] libmachine: (addons-368929) DBG | private KVM network mk-addons-368929 192.168.39.0/24 created
	I0915 06:30:35.121625   13942 main.go:141] libmachine: (addons-368929) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 06:30:35.121656   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.121548   13964 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:35.121742   13942 main.go:141] libmachine: (addons-368929) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 06:30:35.379116   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.378937   13964 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa...
	I0915 06:30:35.512593   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.512453   13964 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/addons-368929.rawdisk...
	I0915 06:30:35.512623   13942 main.go:141] libmachine: (addons-368929) DBG | Writing magic tar header
	I0915 06:30:35.512637   13942 main.go:141] libmachine: (addons-368929) DBG | Writing SSH key tar header
	I0915 06:30:35.512649   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.512598   13964 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 ...
	I0915 06:30:35.512682   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929
	I0915 06:30:35.512720   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 06:30:35.512748   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:35.512761   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 (perms=drwx------)
	I0915 06:30:35.512770   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 06:30:35.512782   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 06:30:35.512789   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins
	I0915 06:30:35.512796   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home
	I0915 06:30:35.512802   13942 main.go:141] libmachine: (addons-368929) DBG | Skipping /home - not owner
	I0915 06:30:35.512811   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 06:30:35.512824   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 06:30:35.512862   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 06:30:35.512879   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 06:30:35.512887   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 06:30:35.512892   13942 main.go:141] libmachine: (addons-368929) Creating domain...
	I0915 06:30:35.513950   13942 main.go:141] libmachine: (addons-368929) define libvirt domain using xml: 
	I0915 06:30:35.513976   13942 main.go:141] libmachine: (addons-368929) <domain type='kvm'>
	I0915 06:30:35.513987   13942 main.go:141] libmachine: (addons-368929)   <name>addons-368929</name>
	I0915 06:30:35.513996   13942 main.go:141] libmachine: (addons-368929)   <memory unit='MiB'>4000</memory>
	I0915 06:30:35.514006   13942 main.go:141] libmachine: (addons-368929)   <vcpu>2</vcpu>
	I0915 06:30:35.514012   13942 main.go:141] libmachine: (addons-368929)   <features>
	I0915 06:30:35.514017   13942 main.go:141] libmachine: (addons-368929)     <acpi/>
	I0915 06:30:35.514020   13942 main.go:141] libmachine: (addons-368929)     <apic/>
	I0915 06:30:35.514025   13942 main.go:141] libmachine: (addons-368929)     <pae/>
	I0915 06:30:35.514029   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514034   13942 main.go:141] libmachine: (addons-368929)   </features>
	I0915 06:30:35.514040   13942 main.go:141] libmachine: (addons-368929)   <cpu mode='host-passthrough'>
	I0915 06:30:35.514045   13942 main.go:141] libmachine: (addons-368929)   
	I0915 06:30:35.514052   13942 main.go:141] libmachine: (addons-368929)   </cpu>
	I0915 06:30:35.514057   13942 main.go:141] libmachine: (addons-368929)   <os>
	I0915 06:30:35.514063   13942 main.go:141] libmachine: (addons-368929)     <type>hvm</type>
	I0915 06:30:35.514068   13942 main.go:141] libmachine: (addons-368929)     <boot dev='cdrom'/>
	I0915 06:30:35.514074   13942 main.go:141] libmachine: (addons-368929)     <boot dev='hd'/>
	I0915 06:30:35.514079   13942 main.go:141] libmachine: (addons-368929)     <bootmenu enable='no'/>
	I0915 06:30:35.514087   13942 main.go:141] libmachine: (addons-368929)   </os>
	I0915 06:30:35.514123   13942 main.go:141] libmachine: (addons-368929)   <devices>
	I0915 06:30:35.514143   13942 main.go:141] libmachine: (addons-368929)     <disk type='file' device='cdrom'>
	I0915 06:30:35.514158   13942 main.go:141] libmachine: (addons-368929)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/boot2docker.iso'/>
	I0915 06:30:35.514178   13942 main.go:141] libmachine: (addons-368929)       <target dev='hdc' bus='scsi'/>
	I0915 06:30:35.514196   13942 main.go:141] libmachine: (addons-368929)       <readonly/>
	I0915 06:30:35.514210   13942 main.go:141] libmachine: (addons-368929)     </disk>
	I0915 06:30:35.514224   13942 main.go:141] libmachine: (addons-368929)     <disk type='file' device='disk'>
	I0915 06:30:35.514233   13942 main.go:141] libmachine: (addons-368929)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 06:30:35.514247   13942 main.go:141] libmachine: (addons-368929)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/addons-368929.rawdisk'/>
	I0915 06:30:35.514254   13942 main.go:141] libmachine: (addons-368929)       <target dev='hda' bus='virtio'/>
	I0915 06:30:35.514259   13942 main.go:141] libmachine: (addons-368929)     </disk>
	I0915 06:30:35.514272   13942 main.go:141] libmachine: (addons-368929)     <interface type='network'>
	I0915 06:30:35.514279   13942 main.go:141] libmachine: (addons-368929)       <source network='mk-addons-368929'/>
	I0915 06:30:35.514284   13942 main.go:141] libmachine: (addons-368929)       <model type='virtio'/>
	I0915 06:30:35.514291   13942 main.go:141] libmachine: (addons-368929)     </interface>
	I0915 06:30:35.514298   13942 main.go:141] libmachine: (addons-368929)     <interface type='network'>
	I0915 06:30:35.514327   13942 main.go:141] libmachine: (addons-368929)       <source network='default'/>
	I0915 06:30:35.514346   13942 main.go:141] libmachine: (addons-368929)       <model type='virtio'/>
	I0915 06:30:35.514353   13942 main.go:141] libmachine: (addons-368929)     </interface>
	I0915 06:30:35.514363   13942 main.go:141] libmachine: (addons-368929)     <serial type='pty'>
	I0915 06:30:35.514370   13942 main.go:141] libmachine: (addons-368929)       <target port='0'/>
	I0915 06:30:35.514375   13942 main.go:141] libmachine: (addons-368929)     </serial>
	I0915 06:30:35.514382   13942 main.go:141] libmachine: (addons-368929)     <console type='pty'>
	I0915 06:30:35.514401   13942 main.go:141] libmachine: (addons-368929)       <target type='serial' port='0'/>
	I0915 06:30:35.514411   13942 main.go:141] libmachine: (addons-368929)     </console>
	I0915 06:30:35.514423   13942 main.go:141] libmachine: (addons-368929)     <rng model='virtio'>
	I0915 06:30:35.514431   13942 main.go:141] libmachine: (addons-368929)       <backend model='random'>/dev/random</backend>
	I0915 06:30:35.514440   13942 main.go:141] libmachine: (addons-368929)     </rng>
	I0915 06:30:35.514452   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514462   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514471   13942 main.go:141] libmachine: (addons-368929)   </devices>
	I0915 06:30:35.514478   13942 main.go:141] libmachine: (addons-368929) </domain>
	I0915 06:30:35.514493   13942 main.go:141] libmachine: (addons-368929) 
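
The block above is libmachine's dump of the libvirt domain XML it defines for the guest: 4000 MiB of RAM, 2 vCPUs, host-passthrough CPU, the boot2docker ISO attached as a SCSI cdrom, the raw disk on virtio, two virtio NICs (the dedicated mk-addons-368929 network plus the default network), and a virtio RNG. As a rough illustration of how such a definition can be rendered from driver settings, here is a minimal Go sketch using text/template; the Domain struct, its field names, and the trimmed-down template are hypothetical, not minikube's actual KVM driver types.

    package main

    import (
        "os"
        "text/template"
    )

    // Domain holds the handful of settings that appear in the XML above.
    // Struct and template are illustrative only.
    type Domain struct {
        Name      string
        MemoryMiB int
        VCPUs     int
        ISOPath   string
        DiskPath  string
        Network   string
    }

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <cpu mode='host-passthrough'/>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISOPath}}'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        d := Domain{
            Name:      "addons-368929",
            MemoryMiB: 4000,
            VCPUs:     2,
            ISOPath:   "/path/to/boot2docker.iso", // placeholder paths
            DiskPath:  "/path/to/addons-368929.rawdisk",
            Network:   "mk-addons-368929",
        }
        // Render the XML to stdout; a real driver would hand it to virDomainDefineXML.
        tmpl := template.Must(template.New("domain").Parse(domainTmpl))
        if err := tmpl.Execute(os.Stdout, d); err != nil {
            panic(err)
        }
    }
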
	I0915 06:30:35.519732   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:97:d7:7e in network default
	I0915 06:30:35.520190   13942 main.go:141] libmachine: (addons-368929) Ensuring networks are active...
	I0915 06:30:35.520223   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:35.520835   13942 main.go:141] libmachine: (addons-368929) Ensuring network default is active
	I0915 06:30:35.521094   13942 main.go:141] libmachine: (addons-368929) Ensuring network mk-addons-368929 is active
	I0915 06:30:35.521540   13942 main.go:141] libmachine: (addons-368929) Getting domain xml...
	I0915 06:30:35.522139   13942 main.go:141] libmachine: (addons-368929) Creating domain...
	I0915 06:30:36.911230   13942 main.go:141] libmachine: (addons-368929) Waiting to get IP...
	I0915 06:30:36.912033   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:36.912348   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:36.912367   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:36.912342   13964 retry.go:31] will retry after 305.621927ms: waiting for machine to come up
	I0915 06:30:37.219791   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.220118   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.220142   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.220077   13964 retry.go:31] will retry after 369.163907ms: waiting for machine to come up
	I0915 06:30:37.590495   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.590957   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.590982   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.590911   13964 retry.go:31] will retry after 359.18262ms: waiting for machine to come up
	I0915 06:30:37.951271   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.951735   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.951766   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.951687   13964 retry.go:31] will retry after 431.887952ms: waiting for machine to come up
	I0915 06:30:38.385216   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:38.385618   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:38.385654   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:38.385573   13964 retry.go:31] will retry after 586.296252ms: waiting for machine to come up
	I0915 06:30:38.973375   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:38.973835   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:38.973871   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:38.973742   13964 retry.go:31] will retry after 586.258738ms: waiting for machine to come up
	I0915 06:30:39.561452   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:39.561928   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:39.561949   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:39.561894   13964 retry.go:31] will retry after 904.897765ms: waiting for machine to come up
	I0915 06:30:40.468462   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:40.468857   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:40.468885   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:40.468834   13964 retry.go:31] will retry after 1.465267821s: waiting for machine to come up
	I0915 06:30:41.936456   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:41.936817   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:41.936840   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:41.936771   13964 retry.go:31] will retry after 1.712738986s: waiting for machine to come up
	I0915 06:30:43.651694   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:43.652084   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:43.652108   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:43.652035   13964 retry.go:31] will retry after 2.008845539s: waiting for machine to come up
	I0915 06:30:45.663024   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:45.663547   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:45.663573   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:45.663481   13964 retry.go:31] will retry after 2.586699686s: waiting for machine to come up
	I0915 06:30:48.251434   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:48.251775   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:48.251796   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:48.251742   13964 retry.go:31] will retry after 2.759887359s: waiting for machine to come up
	I0915 06:30:51.013703   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:51.014097   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:51.014135   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:51.014061   13964 retry.go:31] will retry after 4.488920728s: waiting for machine to come up
	I0915 06:30:55.504672   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.505169   13942 main.go:141] libmachine: (addons-368929) Found IP for machine: 192.168.39.212
	I0915 06:30:55.505195   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has current primary IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
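
The retries above show the driver polling libvirt's DHCP leases for the guest's MAC address via retry.go, with progressively longer waits (roughly 300 ms growing to several seconds) until an IP finally shows up after about 20 seconds. A generic version of that wait-with-backoff loop might look like the sketch below; the poll function and the backoff schedule are illustrative, not the exact ones minikube uses.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check() until it succeeds or the deadline passes,
    // growing the delay between attempts (capped), much like the log above.
    func waitFor(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            fmt.Printf("attempt %d failed (%v); will retry after %s\n", attempt, err, delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the wait, roughly like the log's schedule
            }
        }
    }

    func main() {
        // Stand-in for "look up the DHCP lease for this MAC": succeed on the 4th try.
        tries := 0
        err := waitFor(30*time.Second, func() error {
            tries++
            if tries < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
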
	I0915 06:30:55.505204   13942 main.go:141] libmachine: (addons-368929) Reserving static IP address...
	I0915 06:30:55.505525   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find host DHCP lease matching {name: "addons-368929", mac: "52:54:00:b0:ac:60", ip: "192.168.39.212"} in network mk-addons-368929
	I0915 06:30:55.572968   13942 main.go:141] libmachine: (addons-368929) DBG | Getting to WaitForSSH function...
	I0915 06:30:55.573003   13942 main.go:141] libmachine: (addons-368929) Reserved static IP address: 192.168.39.212
	I0915 06:30:55.573015   13942 main.go:141] libmachine: (addons-368929) Waiting for SSH to be available...
	I0915 06:30:55.575550   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.575899   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.575919   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.576162   13942 main.go:141] libmachine: (addons-368929) DBG | Using SSH client type: external
	I0915 06:30:55.576193   13942 main.go:141] libmachine: (addons-368929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa (-rw-------)
	I0915 06:30:55.576224   13942 main.go:141] libmachine: (addons-368929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 06:30:55.576241   13942 main.go:141] libmachine: (addons-368929) DBG | About to run SSH command:
	I0915 06:30:55.576256   13942 main.go:141] libmachine: (addons-368929) DBG | exit 0
	I0915 06:30:55.705901   13942 main.go:141] libmachine: (addons-368929) DBG | SSH cmd err, output: <nil>: 
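
WaitForSSH here shells out to /usr/bin/ssh with a fixed option list (no host-key checking, 10 s connect timeout, the generated id_rsa key) and simply runs `exit 0`; an empty error means the guest's sshd is reachable. A stripped-down version of that probe using os/exec could look like this sketch, where the user, host, and key path are the values from the log and the helper itself is just an illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReachable runs `ssh ... user@host exit 0` and reports whether it succeeded.
    func sshReachable(user, host, keyPath string) error {
        args := []string{
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, host),
            "exit", "0",
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v (output: %q)", err, out)
        }
        return nil
    }

    func main() {
        // Values taken from the log above; retry a few times while the guest boots.
        for i := 0; i < 5; i++ {
            err := sshReachable("docker", "192.168.39.212",
                "/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa")
            if err != nil {
                fmt.Println("not ready yet:", err)
                time.Sleep(2 * time.Second)
                continue
            }
            fmt.Println("SSH is available")
            return
        }
    }
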
	I0915 06:30:55.706188   13942 main.go:141] libmachine: (addons-368929) KVM machine creation complete!
	I0915 06:30:55.706473   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:55.707031   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:55.707200   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:55.707361   13942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 06:30:55.707372   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:30:55.708643   13942 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 06:30:55.708660   13942 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 06:30:55.708667   13942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 06:30:55.708675   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.710847   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.711159   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.711187   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.711316   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.711564   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.711697   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.711844   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.712017   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.712184   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.712193   13942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 06:30:55.812983   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:30:55.813004   13942 main.go:141] libmachine: Detecting the provisioner...
	I0915 06:30:55.813010   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.815500   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.815897   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.815925   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.816042   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.816221   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.816381   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.816518   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.816670   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.816829   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.816839   13942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 06:30:55.918360   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 06:30:55.918439   13942 main.go:141] libmachine: found compatible host: buildroot
	I0915 06:30:55.918448   13942 main.go:141] libmachine: Provisioning with buildroot...
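
The provisioner identifies the guest OS by running `cat /etc/os-release` and matching the NAME/ID fields ("Buildroot"/"buildroot" here), then selects the matching provisioner. Parsing that key=value format is straightforward; below is a minimal sketch, not minikube's actual parser, fed with the exact output captured above.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns /etc/os-release content into a key/value map,
    // stripping surrounding quotes, e.g. PRETTY_NAME="Buildroot 2023.02.9".
    func parseOSRelease(content string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(content))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            key, val, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[key] = strings.Trim(val, `"`)
        }
        return out
    }

    func main() {
        content := `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`
        info := parseOSRelease(content)
        fmt.Println("ID:", info["ID"], "version:", info["VERSION_ID"])
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot")
        }
    }
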
	I0915 06:30:55.918454   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:55.918690   13942 buildroot.go:166] provisioning hostname "addons-368929"
	I0915 06:30:55.918711   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:55.918840   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.920966   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.921446   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.921474   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.921659   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.921826   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.921967   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.922063   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.922230   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.922377   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.922388   13942 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-368929 && echo "addons-368929" | sudo tee /etc/hostname
	I0915 06:30:56.039825   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-368929
	
	I0915 06:30:56.039850   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.042251   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.042524   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.042543   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.042750   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.042921   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.043023   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.043132   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.043236   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.043381   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.043395   13942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-368929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-368929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-368929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:30:56.154978   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
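
Provisioning the hostname is a two-step shell snippet: `hostname addons-368929` plus writing /etc/hostname, followed by the /etc/hosts edit shown above, which rewrites an existing 127.0.1.1 line or appends one if the name is not already present. The same /etc/hosts logic done in Go rather than sed/tee might look like this sketch (it operates on a string instead of the real file, purely for illustration):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: if the hostname already appears
    // in /etc/hosts, leave it; otherwise rewrite the 127.0.1.1 line or append one.
    func ensureHostsEntry(hosts, name string) string {
        for _, line := range strings.Split(hosts, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 {
                for _, f := range fields[1:] {
                    if f == name {
                        return hosts // already present
                    }
                }
            }
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
        fmt.Print(ensureHostsEntry(hosts, "addons-368929"))
    }
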
	I0915 06:30:56.155020   13942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 06:30:56.155050   13942 buildroot.go:174] setting up certificates
	I0915 06:30:56.155069   13942 provision.go:84] configureAuth start
	I0915 06:30:56.155094   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:56.155378   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.157861   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.158130   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.158164   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.158372   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.160429   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.160700   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.160725   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.160840   13942 provision.go:143] copyHostCerts
	I0915 06:30:56.160923   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 06:30:56.161059   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 06:30:56.161236   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 06:30:56.161313   13942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.addons-368929 san=[127.0.0.1 192.168.39.212 addons-368929 localhost minikube]
	I0915 06:30:56.248249   13942 provision.go:177] copyRemoteCerts
	I0915 06:30:56.248322   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:30:56.248351   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.251283   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.251603   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.251636   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.251851   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.252026   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.252134   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.252249   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.336360   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:30:56.360914   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:30:56.385134   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 06:30:56.408123   13942 provision.go:87] duration metric: took 253.040376ms to configureAuth
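
configureAuth copies the host CA material and generates a server certificate whose SANs cover 127.0.0.1, the guest IP 192.168.39.212, the machine name, localhost and minikube, then scp's ca.pem, server.pem and server-key.pem into /etc/docker on the guest. For illustration only, here is a compact variant using crypto/x509 with that SAN list; it self-signs to stay short, whereas the real flow signs the server cert with the minikube CA, so treat this strictly as a sketch of the SAN handling.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs taken from the log: the loopback and guest IPs plus host names.
        ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.212")}
        dns := []string{"addons-368929", "localhost", "minikube"}

        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-368929"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }
        // Self-signed here for brevity; the real flow uses the CA key as the parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
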
	I0915 06:30:56.408147   13942 buildroot.go:189] setting minikube options for container-runtime
	I0915 06:30:56.408302   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:56.408370   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.410873   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.411209   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.411236   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.411382   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.411556   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.411726   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.411866   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.412039   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.412202   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.412215   13942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:30:56.625572   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:30:56.625596   13942 main.go:141] libmachine: Checking connection to Docker...
	I0915 06:30:56.625603   13942 main.go:141] libmachine: (addons-368929) Calling .GetURL
	I0915 06:30:56.626810   13942 main.go:141] libmachine: (addons-368929) DBG | Using libvirt version 6000000
	I0915 06:30:56.628657   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.628951   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.628973   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.629143   13942 main.go:141] libmachine: Docker is up and running!
	I0915 06:30:56.629155   13942 main.go:141] libmachine: Reticulating splines...
	I0915 06:30:56.629162   13942 client.go:171] duration metric: took 22.044062992s to LocalClient.Create
	I0915 06:30:56.629182   13942 start.go:167] duration metric: took 22.044122374s to libmachine.API.Create "addons-368929"
	I0915 06:30:56.629204   13942 start.go:293] postStartSetup for "addons-368929" (driver="kvm2")
	I0915 06:30:56.629219   13942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:30:56.629241   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.629436   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:30:56.629459   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.631144   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.631446   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.631469   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.631552   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.631671   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.631765   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.631918   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.712275   13942 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:30:56.716708   13942 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 06:30:56.716735   13942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 06:30:56.716821   13942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 06:30:56.716859   13942 start.go:296] duration metric: took 87.643981ms for postStartSetup
	I0915 06:30:56.716897   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:56.717419   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.719736   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.720131   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.720166   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.720394   13942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json ...
	I0915 06:30:56.720616   13942 start.go:128] duration metric: took 22.152940074s to createHost
	I0915 06:30:56.720641   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.722803   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.723117   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.723157   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.723308   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.723466   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.723612   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.723752   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.723900   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.724053   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.724062   13942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 06:30:56.826287   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726381856.792100710
	
	I0915 06:30:56.826308   13942 fix.go:216] guest clock: 1726381856.792100710
	I0915 06:30:56.826317   13942 fix.go:229] Guest: 2024-09-15 06:30:56.79210071 +0000 UTC Remote: 2024-09-15 06:30:56.720628741 +0000 UTC m=+22.251007338 (delta=71.471969ms)
	I0915 06:30:56.826365   13942 fix.go:200] guest clock delta is within tolerance: 71.471969ms
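
The clock check runs `date +%s.%N` on the guest, parses the seconds.nanoseconds string, and compares it with the host's wall clock; the ~71 ms delta above falls inside the allowed tolerance, so no time resync is forced. Parsing that output and computing the delta takes only a few lines of Go; a hedged sketch follows, where the tolerance value is arbitrary rather than minikube's actual setting.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseDateNS parses `date +%s.%N` output such as "1726381856.792100710".
    func parseDateNS(s string) (time.Time, error) {
        secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
        sec, err := strconv.ParseInt(secStr, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if nsecStr != "" {
            if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseDateNS("1726381856.792100710") // value from the log
        if err != nil {
            panic(err)
        }
        host := guest.Add(-71471969 * time.Nanosecond) // stand-in for time.Now() on the host
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
    }
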
	I0915 06:30:56.826373   13942 start.go:83] releasing machines lock for "addons-368929", held for 22.25878368s
	I0915 06:30:56.826395   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.826655   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.828977   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.829310   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.829334   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.829599   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830090   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830276   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830359   13942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:30:56.830415   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.830460   13942 ssh_runner.go:195] Run: cat /version.json
	I0915 06:30:56.830484   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.833094   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833320   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833452   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.833493   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833613   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.833768   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.833779   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.833801   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833988   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.833998   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.834119   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.834185   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.834246   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.834495   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.938490   13942 ssh_runner.go:195] Run: systemctl --version
	I0915 06:30:56.944445   13942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:30:57.102745   13942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 06:30:57.108913   13942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 06:30:57.108984   13942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:57.124469   13942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 06:30:57.124494   13942 start.go:495] detecting cgroup driver to use...
	I0915 06:30:57.124559   13942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:30:57.141386   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:30:57.155119   13942 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:30:57.155185   13942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:30:57.168695   13942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:30:57.182111   13942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:30:57.306290   13942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:30:57.442868   13942 docker.go:233] disabling docker service ...
	I0915 06:30:57.442931   13942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:30:57.456992   13942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:30:57.470375   13942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:30:57.613118   13942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:30:57.736610   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:30:57.750704   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:57.769455   13942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:30:57.769509   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.779795   13942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:30:57.779873   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.790360   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.800573   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.811474   13942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:30:57.822289   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.832671   13942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.849736   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.860236   13942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:30:57.869843   13942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 06:30:57.869913   13942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 06:30:57.883852   13942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:30:57.893890   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:58.013644   13942 ssh_runner.go:195] Run: sudo systemctl restart crio
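
Preparing CRI-O above is a fixed sequence of remote shell steps: stop and mask the docker and cri-docker units, write /etc/crictl.yaml pointing at crio.sock, sed the pause image and cgroup manager into /etc/crio/crio.conf.d/02-crio.conf, open unprivileged ports, load br_netfilter, enable IP forwarding, then daemon-reload and restart crio. One simple way to drive such an ordered command list from Go is sketched below; the list is a condensed, approximate subset of what the log shows, and the runner executes locally via sh -c (in dry-run mode by default) purely for illustration, whereas minikube sends each command over SSH.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Condensed version of the command sequence visible in the log above.
        steps := []string{
            `sudo systemctl stop -f docker.socket docker.service || true`,
            `printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo modprobe br_netfilter && sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
            `sudo systemctl daemon-reload && sudo systemctl restart crio`,
        }
        dryRun := true // flip to false to actually execute the steps
        for i, s := range steps {
            fmt.Printf("step %d: %s\n", i+1, s)
            if dryRun {
                continue
            }
            out, err := exec.Command("sh", "-c", s).CombinedOutput()
            if err != nil {
                fmt.Printf("  failed: %v\n%s", err, out)
                return // the real flow surfaces the error and aborts provisioning
            }
        }
    }
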
	I0915 06:30:58.112843   13942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:30:58.112948   13942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:30:58.119889   13942 start.go:563] Will wait 60s for crictl version
	I0915 06:30:58.119973   13942 ssh_runner.go:195] Run: which crictl
	I0915 06:30:58.123756   13942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:30:58.159622   13942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
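
After restarting CRI-O the tooling waits, up to 60 s each, for the /var/run/crio/crio.sock path to appear and for crictl to answer, which is what the stat and crictl version calls above are doing. Waiting on a socket path with a deadline is a small polling loop; a sketch:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until the given path exists or the timeout expires,
    // mirroring the "Will wait 60s for socket path" step in the log.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("crio socket is ready")
    }
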
	I0915 06:30:58.159742   13942 ssh_runner.go:195] Run: crio --version
	I0915 06:30:58.186651   13942 ssh_runner.go:195] Run: crio --version
	I0915 06:30:58.215616   13942 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 06:30:58.216928   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:58.219246   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:58.219519   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:58.219540   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:58.219725   13942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 06:30:58.223999   13942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:58.236938   13942 kubeadm.go:883] updating cluster {Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:30:58.237037   13942 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:58.237078   13942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:58.273590   13942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0915 06:30:58.273648   13942 ssh_runner.go:195] Run: which lz4
	I0915 06:30:58.277802   13942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 06:30:58.282345   13942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 06:30:58.282370   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0915 06:30:59.603321   13942 crio.go:462] duration metric: took 1.325549194s to copy over tarball
	I0915 06:30:59.603391   13942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 06:31:01.698248   13942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094830019s)
	I0915 06:31:01.698276   13942 crio.go:469] duration metric: took 2.094925403s to extract the tarball
	I0915 06:31:01.698286   13942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 06:31:01.735576   13942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:31:01.777236   13942 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:31:01.777262   13942 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:31:01.777272   13942 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I0915 06:31:01.777361   13942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-368929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:31:01.777425   13942 ssh_runner.go:195] Run: crio config
	I0915 06:31:01.819719   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:31:01.819741   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:31:01.819753   13942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:31:01.819775   13942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-368929 NodeName:addons-368929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:31:01.819928   13942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-368929"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
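
The block above is the kubeadm/kubelet/kube-proxy configuration generated for this node: advertise address 192.168.39.212, node name addons-368929, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12, the cgroupfs driver, and the CRI-O socket as the runtime endpoint. minikube assembles this text from templates; a much-reduced sketch of that approach with text/template, covering only a few of the fields above, follows. The struct and template here are illustrative, not the real ones.

    package main

    import (
        "os"
        "text/template"
    )

    type kubeadmParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
        CRISocket        string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        p := kubeadmParams{
            AdvertiseAddress: "192.168.39.212",
            BindPort:         8443,
            NodeName:         "addons-368929",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            CRISocket:        "unix:///var/run/crio/crio.sock",
        }
        tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
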
	
	I0915 06:31:01.820001   13942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:31:01.830202   13942 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:31:01.830264   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:31:01.840653   13942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0915 06:31:01.859116   13942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:31:01.876520   13942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0915 06:31:01.893776   13942 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0915 06:31:01.897643   13942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:31:01.910584   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:31:02.038664   13942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:31:02.055783   13942 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929 for IP: 192.168.39.212
	I0915 06:31:02.055810   13942 certs.go:194] generating shared ca certs ...
	I0915 06:31:02.055829   13942 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.055990   13942 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 06:31:02.153706   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt ...
	I0915 06:31:02.153733   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt: {Name:mk72efeae7a5e079e02dddca5ae1326e66b50791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.153893   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key ...
	I0915 06:31:02.153904   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key: {Name:mk60adb75b67a4ecb03ce39bc98fc22d93504324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.153974   13942 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 06:31:02.294105   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt ...
	I0915 06:31:02.294129   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt: {Name:mk6ad9572391112128f71a73d401b2f36e5187ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.294270   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key ...
	I0915 06:31:02.294280   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key: {Name:mk997129f7d8042b546775ee409cc0c02ea66874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.294341   13942 certs.go:256] generating profile certs ...
	I0915 06:31:02.294402   13942 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key
	I0915 06:31:02.294422   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt with IP's: []
	I0915 06:31:02.474521   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt ...
	I0915 06:31:02.474552   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: {Name:mk5230116ec10f82362ea4d2c021febd7553501e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.474711   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key ...
	I0915 06:31:02.474722   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key: {Name:mk4c7cfc18d39b7a5234396e9e59579ecd48ad76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.474787   13942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f
	I0915 06:31:02.474804   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212]
	I0915 06:31:02.564099   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f ...
	I0915 06:31:02.564130   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f: {Name:mkc23c9f9e76c0a988b86d564062dd840e1d35eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.564279   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f ...
	I0915 06:31:02.564291   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f: {Name:mk4e887c90c5c7adca7e638dabe3b3c3ddd2bf81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.564361   13942 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt
	I0915 06:31:02.564435   13942 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key
	I0915 06:31:02.564480   13942 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key
	I0915 06:31:02.564496   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt with IP's: []
	I0915 06:31:02.689851   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt ...
	I0915 06:31:02.689879   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt: {Name:mk64a1aa0a2a68e9a444363c01c5932bf3e0851a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.690029   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key ...
	I0915 06:31:02.690039   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key: {Name:mk7c8d3875c49566ea32a3445025bddf158772fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.690216   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:31:02.690247   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:31:02.690274   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:31:02.690296   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 06:31:02.690807   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:31:02.716623   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:31:02.745150   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:31:02.773869   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:31:02.798062   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:31:02.820956   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:31:02.844972   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:31:02.869179   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 06:31:02.893630   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:31:02.917474   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:31:02.934168   13942 ssh_runner.go:195] Run: openssl version
	I0915 06:31:02.940062   13942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:31:02.951007   13942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.955419   13942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.955475   13942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.961175   13942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
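The symlink name b5213941.0 created above is derived from the subject hash of the CA certificate; the same value can be reproduced directly with openssl (illustrative only, using the command already shown in the log):

	# prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941 for this CA)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem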
	I0915 06:31:02.972122   13942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:31:02.976566   13942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:31:02.976612   13942 kubeadm.go:392] StartCluster: {Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:31:02.976677   13942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:31:02.976718   13942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:31:03.012559   13942 cri.go:89] found id: ""
	I0915 06:31:03.012619   13942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:31:03.022968   13942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:31:03.032884   13942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:31:03.042781   13942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:31:03.042798   13942 kubeadm.go:157] found existing configuration files:
	
	I0915 06:31:03.042840   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:31:03.052268   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:31:03.052318   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:31:03.062232   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:31:03.071324   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:31:03.071379   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:31:03.080551   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:31:03.089375   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:31:03.089424   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:31:03.099002   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:31:03.108163   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:31:03.108213   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:31:03.117874   13942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 06:31:03.179081   13942 kubeadm.go:310] W0915 06:31:03.150215     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:31:03.179952   13942 kubeadm.go:310] W0915 06:31:03.151258     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:31:03.288765   13942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:31:13.244212   13942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:31:13.244285   13942 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:31:13.244371   13942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:31:13.244504   13942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:31:13.244637   13942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:31:13.244724   13942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:31:13.246462   13942 out.go:235]   - Generating certificates and keys ...
	I0915 06:31:13.246540   13942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:31:13.246602   13942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:31:13.246676   13942 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:31:13.246741   13942 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:31:13.246798   13942 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:31:13.246841   13942 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:31:13.246910   13942 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:31:13.247029   13942 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-368929 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0915 06:31:13.247105   13942 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:31:13.247259   13942 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-368929 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0915 06:31:13.247354   13942 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:31:13.247454   13942 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:31:13.247496   13942 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:31:13.247569   13942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:31:13.247649   13942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:31:13.247737   13942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:31:13.247812   13942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:31:13.247905   13942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:31:13.247987   13942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:31:13.248103   13942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:31:13.248230   13942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:31:13.249711   13942 out.go:235]   - Booting up control plane ...
	I0915 06:31:13.249799   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:31:13.249895   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:31:13.249949   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:31:13.250075   13942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:31:13.250170   13942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:31:13.250212   13942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:31:13.250324   13942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:31:13.250471   13942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:31:13.250554   13942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000955995s
	I0915 06:31:13.250648   13942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:31:13.250740   13942 kubeadm.go:310] [api-check] The API server is healthy after 5.001828524s
	I0915 06:31:13.250879   13942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:31:13.250988   13942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:31:13.251068   13942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:31:13.251284   13942 kubeadm.go:310] [mark-control-plane] Marking the node addons-368929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:31:13.251342   13942 kubeadm.go:310] [bootstrap-token] Using token: 0sj1hx.q1rkmq819x572pmn
	I0915 06:31:13.252875   13942 out.go:235]   - Configuring RBAC rules ...
	I0915 06:31:13.253007   13942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:31:13.253098   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:31:13.253263   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:31:13.253367   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:31:13.253467   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:31:13.253534   13942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:31:13.253646   13942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:31:13.253696   13942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:31:13.253766   13942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:31:13.253779   13942 kubeadm.go:310] 
	I0915 06:31:13.253880   13942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:31:13.253892   13942 kubeadm.go:310] 
	I0915 06:31:13.253965   13942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:31:13.253973   13942 kubeadm.go:310] 
	I0915 06:31:13.253994   13942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:31:13.254066   13942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:31:13.254144   13942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:31:13.254155   13942 kubeadm.go:310] 
	I0915 06:31:13.254229   13942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:31:13.254238   13942 kubeadm.go:310] 
	I0915 06:31:13.254305   13942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:31:13.254315   13942 kubeadm.go:310] 
	I0915 06:31:13.254361   13942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:31:13.254433   13942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:31:13.254531   13942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:31:13.254543   13942 kubeadm.go:310] 
	I0915 06:31:13.254651   13942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:31:13.254721   13942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:31:13.254735   13942 kubeadm.go:310] 
	I0915 06:31:13.254843   13942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0sj1hx.q1rkmq819x572pmn \
	I0915 06:31:13.254928   13942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b \
	I0915 06:31:13.254946   13942 kubeadm.go:310] 	--control-plane 
	I0915 06:31:13.254952   13942 kubeadm.go:310] 
	I0915 06:31:13.255027   13942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:31:13.255036   13942 kubeadm.go:310] 
	I0915 06:31:13.255108   13942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0sj1hx.q1rkmq819x572pmn \
	I0915 06:31:13.255213   13942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b 
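The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. With this cluster's certificatesDir (/var/lib/minikube/certs, per the config earlier in the log) it could be recomputed on the node as follows; this is a sketch assuming openssl is available and the CA key is RSA:

	# recompute the discovery token CA cert hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'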
	I0915 06:31:13.255224   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:31:13.255230   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:31:13.256846   13942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 06:31:13.258367   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 06:31:13.269533   13942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0915 06:31:13.286955   13942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:31:13.287033   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:13.287047   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-368929 minikube.k8s.io/updated_at=2024_09_15T06_31_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-368929 minikube.k8s.io/primary=true
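The two kubectl invocations above create the minikube-rbac cluster-admin binding and stamp the node with minikube's labels; both results could be inspected the same way the test drives kubectl (illustrative follow-up commands, not part of the run):

	# hypothetical verification using the same in-VM kubectl binary and kubeconfig
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node addons-368929 --show-labels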
	I0915 06:31:13.439577   13942 ops.go:34] apiserver oom_adj: -16
	I0915 06:31:13.439619   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:13.939804   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:14.440122   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:14.939768   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:15.440612   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:15.940408   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:16.439804   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:16.940340   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:17.440583   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:17.530382   13942 kubeadm.go:1113] duration metric: took 4.243409251s to wait for elevateKubeSystemPrivileges
	I0915 06:31:17.530429   13942 kubeadm.go:394] duration metric: took 14.553819023s to StartCluster
	I0915 06:31:17.530452   13942 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:17.530582   13942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:31:17.530898   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:17.531115   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:31:17.531117   13942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:31:17.531135   13942 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:31:17.531245   13942 addons.go:69] Setting yakd=true in profile "addons-368929"
	I0915 06:31:17.531264   13942 addons.go:234] Setting addon yakd=true in "addons-368929"
	I0915 06:31:17.531291   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531295   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:31:17.531303   13942 addons.go:69] Setting ingress-dns=true in profile "addons-368929"
	I0915 06:31:17.531317   13942 addons.go:69] Setting default-storageclass=true in profile "addons-368929"
	I0915 06:31:17.531326   13942 addons.go:234] Setting addon ingress-dns=true in "addons-368929"
	I0915 06:31:17.531335   13942 addons.go:69] Setting metrics-server=true in profile "addons-368929"
	I0915 06:31:17.531338   13942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-368929"
	I0915 06:31:17.531337   13942 addons.go:69] Setting registry=true in profile "addons-368929"
	I0915 06:31:17.531349   13942 addons.go:234] Setting addon metrics-server=true in "addons-368929"
	I0915 06:31:17.531345   13942 addons.go:69] Setting inspektor-gadget=true in profile "addons-368929"
	I0915 06:31:17.531359   13942 addons.go:234] Setting addon registry=true in "addons-368929"
	I0915 06:31:17.531366   13942 addons.go:234] Setting addon inspektor-gadget=true in "addons-368929"
	I0915 06:31:17.531374   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531366   13942 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-368929"
	I0915 06:31:17.531389   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531390   13942 addons.go:69] Setting ingress=true in profile "addons-368929"
	I0915 06:31:17.531398   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531406   13942 addons.go:234] Setting addon ingress=true in "addons-368929"
	I0915 06:31:17.531416   13942 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-368929"
	I0915 06:31:17.531429   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531441   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531763   13942 addons.go:69] Setting storage-provisioner=true in profile "addons-368929"
	I0915 06:31:17.531769   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531778   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531782   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531785   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531825   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531788   13942 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-368929"
	I0915 06:31:17.531921   13942 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-368929"
	I0915 06:31:17.531375   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531784   13942 addons.go:234] Setting addon storage-provisioner=true in "addons-368929"
	I0915 06:31:17.532163   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531796   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531381   13942 addons.go:69] Setting gcp-auth=true in profile "addons-368929"
	I0915 06:31:17.532282   13942 mustload.go:65] Loading cluster: addons-368929
	I0915 06:31:17.532299   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532333   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532362   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532377   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532462   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:31:17.532536   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532574   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531801   13942 addons.go:69] Setting volcano=true in profile "addons-368929"
	I0915 06:31:17.532649   13942 addons.go:234] Setting addon volcano=true in "addons-368929"
	I0915 06:31:17.532676   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531802   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532807   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532834   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.533044   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531799   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533082   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.533100   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531802   13942 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-368929"
	I0915 06:31:17.533268   13942 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-368929"
	I0915 06:31:17.533292   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531799   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533422   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531386   13942 addons.go:69] Setting helm-tiller=true in profile "addons-368929"
	I0915 06:31:17.533579   13942 addons.go:234] Setting addon helm-tiller=true in "addons-368929"
	I0915 06:31:17.533603   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.533660   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533677   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531808   13942 addons.go:69] Setting cloud-spanner=true in profile "addons-368929"
	I0915 06:31:17.533996   13942 addons.go:234] Setting addon cloud-spanner=true in "addons-368929"
	I0915 06:31:17.534023   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531808   13942 addons.go:69] Setting volumesnapshots=true in profile "addons-368929"
	I0915 06:31:17.534072   13942 addons.go:234] Setting addon volumesnapshots=true in "addons-368929"
	I0915 06:31:17.534098   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.534391   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.534396   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.534404   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.534410   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.544717   13942 out.go:177] * Verifying Kubernetes components...
	I0915 06:31:17.531817   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531900   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.546517   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.551069   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:31:17.552863   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0915 06:31:17.552873   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I0915 06:31:17.553975   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0915 06:31:17.554008   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0915 06:31:17.554479   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.554606   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.554630   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.554982   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.555001   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.555033   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0915 06:31:17.555190   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.555399   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.555473   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556128   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.556141   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.556194   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.556312   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.556324   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.556379   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556441   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556504   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.556665   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.557213   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.557249   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.560223   13942 addons.go:234] Setting addon default-storageclass=true in "addons-368929"
	I0915 06:31:17.560260   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.560623   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.560654   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.562235   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.562259   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.562337   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.562459   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.562469   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.564071   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.564137   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.564190   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0915 06:31:17.564701   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.564732   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.565696   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.565803   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0915 06:31:17.566345   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.566413   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.566440   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.566451   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.566783   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.566811   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.568220   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.568238   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.568363   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.568373   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.568586   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.568722   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.575956   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.578834   13942 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-368929"
	I0915 06:31:17.578915   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.579206   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.579264   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.586757   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0915 06:31:17.587453   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.587903   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.587915   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.588249   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.588667   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.588681   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.589379   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0915 06:31:17.591499   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I0915 06:31:17.592121   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.592540   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
	I0915 06:31:17.592775   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.592797   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.593043   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.593129   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.593632   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.593670   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.594252   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.594269   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.594288   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.594321   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.594721   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.595309   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.595327   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.595709   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.596188   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.597875   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.598751   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.599189   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.599228   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.600165   13942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:31:17.601729   13942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:31:17.601752   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:31:17.601771   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.605356   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.605714   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.605733   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.606017   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.606225   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.606363   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.606488   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.608862   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0915 06:31:17.609332   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.609839   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.609855   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.610126   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0915 06:31:17.610221   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.610370   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.610667   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.611155   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.611171   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.611594   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.612184   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.612207   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.612239   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.613742   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0915 06:31:17.614273   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.614432   13942 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:31:17.614832   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.614856   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.615194   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.615706   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.615749   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.615938   13942 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:31:17.615956   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:31:17.615977   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.618736   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I0915 06:31:17.619406   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.619549   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.619887   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.619906   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.619934   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0915 06:31:17.619991   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.620005   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.620125   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.620284   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.620306   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.620389   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.620439   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.620546   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.620912   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.620929   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.621094   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.621127   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.621225   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.621390   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.623143   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.624009   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0915 06:31:17.624078   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0915 06:31:17.624757   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.625302   13942 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:31:17.625323   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.625341   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.625640   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.626189   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.626227   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.626492   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0915 06:31:17.626724   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.627122   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:31:17.627137   13942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:31:17.627150   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.627443   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.627842   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.627858   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.628226   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.628780   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.628824   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.629909   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0915 06:31:17.630269   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.630711   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.630727   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.630778   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.630930   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.630947   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.631293   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.631304   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.631320   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.631317   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.631497   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.631668   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.632017   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.632057   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.632337   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.632451   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.635887   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0915 06:31:17.636212   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.636655   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.636671   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.637236   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.637272   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.637490   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.637666   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.639294   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.641479   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:31:17.642960   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:31:17.642978   13942 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:31:17.643001   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.646117   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.646502   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.646522   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.646795   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.647022   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.647177   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.647337   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.650261   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0915 06:31:17.652110   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0915 06:31:17.652286   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0915 06:31:17.652480   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0915 06:31:17.652627   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.652721   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653099   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653125   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.653192   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653334   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653346   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.653410   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653645   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.653768   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653779   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.655709   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0915 06:31:17.655715   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.655739   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.655715   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.655788   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.655803   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.655938   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.656099   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.656181   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.656265   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.656670   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.656688   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.656739   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658305   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658369   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658421   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.659062   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.659317   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.660430   13942 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:31:17.660468   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:31:17.660448   13942 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:31:17.660836   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.661129   13942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:31:17.661142   13942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:31:17.661158   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.661714   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0915 06:31:17.662496   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.662785   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:31:17.662803   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:31:17.662819   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.663231   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.663251   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.663851   13942 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:31:17.663851   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:31:17.665526   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:31:17.665540   13942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:31:17.665573   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:31:17.665590   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.666518   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.667159   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.667209   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.667518   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.667975   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:31:17.668218   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.668405   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.668924   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.668959   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.669158   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.669315   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.669371   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.669386   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.669496   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.669832   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.670044   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.670171   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.670275   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.670559   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:31:17.671441   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.672225   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.672238   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.672404   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.672567   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.672724   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.672859   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.673060   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32869
	I0915 06:31:17.673167   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0915 06:31:17.673197   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:31:17.673464   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.673593   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.674180   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.674197   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.674602   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.674866   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.674970   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0915 06:31:17.676021   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.676113   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:31:17.676325   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.676341   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.676424   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0915 06:31:17.676962   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.677312   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.677398   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.677414   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.677562   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.677584   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.677859   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.678040   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.678549   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0915 06:31:17.679078   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.679084   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0915 06:31:17.679106   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.679181   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.679630   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.679647   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.679655   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:31:17.679706   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.679708   13942 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:31:17.679755   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.679825   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.679985   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.680370   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.680389   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.680459   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.680667   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.680925   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.681204   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:31:17.681687   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:31:17.681708   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.681215   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.681887   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.682415   13942 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:31:17.682606   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.682597   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.682703   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:31:17.683242   13942 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:31:17.684048   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:31:17.684064   13942 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:31:17.684082   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.684243   13942 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:31:17.684541   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.685180   13942 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:31:17.685283   13942 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:31:17.685293   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:31:17.685309   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.685970   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.687103   13942 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:31:17.687121   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:31:17.687139   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.687240   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.687254   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.687302   13942 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:31:17.687385   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.687553   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.687909   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.688208   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:17.688311   13942 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:31:17.688326   13942 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:31:17.688342   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.688954   13942 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:31:17.688971   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:31:17.688986   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.689661   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.689991   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.690025   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.690043   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.690736   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.691322   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.691400   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.691795   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.691992   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.692014   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.692081   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.692213   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.692319   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.692403   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.692846   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.693446   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.694070   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.694103   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694326   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.694567   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.694594   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694685   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.694776   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:17.694916   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.694936   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694974   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.695197   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.695468   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.695660   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.695792   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.696758   13942 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:31:17.696772   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:31:17.696794   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.696898   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0915 06:31:17.697337   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.697347   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.697891   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.697904   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.698246   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.698537   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.698553   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.698595   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.698766   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.698883   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.698993   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.699115   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.699868   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.700039   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.700238   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:17.700244   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:17.700527   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.700539   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.700557   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:17.700564   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:17.700571   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:17.700585   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:17.700712   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.700759   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:17.700775   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:17.700779   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	W0915 06:31:17.700830   13942 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0915 06:31:17.701036   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.701127   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.701199   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
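Each "Plugin server listening at address 127.0.0.1:<port>" / "Calling .GetVersion" pair above reflects libmachine's out-of-process driver model: every concurrent addon goroutine launches the docker-machine-driver-kvm2 binary found at the path logged earlier and talks to it over an RPC endpoint bound to an ephemeral loopback port, which is why each burst of activity advertises a different 127.0.0.1 address. As a rough sketch of that pattern only, using Go's standard net/rpc rather than libmachine's actual plugin protocol or method set:

	// Sketch of the launch-a-plugin-and-call-it pattern seen in the log above.
	// The service name, method, and wire protocol are illustrative; libmachine's
	// real driver RPC differs in detail.
	package main

	import (
		"fmt"
		"log"
		"net"
		"net/rpc"
	)

	// Driver stands in for a machine driver exposed over RPC.
	type Driver struct{}

	// GetVersion mirrors the ".GetVersion" calls in the log.
	func (d *Driver) GetVersion(_ string, reply *int) error {
		*reply = 1 // "Using API Version 1"
		return nil
	}

	func main() {
		// Server side: register the driver and listen on an ephemeral loopback
		// port, like "Plugin server listening at address 127.0.0.1:39423".
		srv := rpc.NewServer()
		if err := srv.Register(new(Driver)); err != nil {
			log.Fatal(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		go srv.Accept(ln)

		// Client side: dial the advertised address and invoke a driver method.
		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			log.Fatal(err)
		}
		var version int
		if err := client.Call("Driver.GetVersion", "kvm2", &version); err != nil {
			log.Fatal(err)
		}
		fmt.Println("plugin API version:", version)
	}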
	I0915 06:31:17.974082   13942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:31:17.974246   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
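Unescaped, the sed pipeline in the command above rewrites the Corefile held in the coredns ConfigMap: it inserts a log directive before the errors line and a hosts block before the "forward . /etc/resolv.conf" line. Reconstructed from that command (unrelated directives elided), the rewritten Corefile contains roughly:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

The start.go message a couple of seconds later ("host record injected into CoreDNS's ConfigMap") reports the completion of exactly this replace.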
	I0915 06:31:18.029440   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:31:18.029460   13942 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:31:18.067088   13942 node_ready.go:35] waiting up to 6m0s for node "addons-368929" to be "Ready" ...
	I0915 06:31:18.078224   13942 node_ready.go:49] node "addons-368929" has status "Ready":"True"
	I0915 06:31:18.078251   13942 node_ready.go:38] duration metric: took 11.135756ms for node "addons-368929" to be "Ready" ...
	I0915 06:31:18.078264   13942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
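The node_ready/pod_ready waits above poll the API server until the node, and then each system-critical pod, reports a Ready condition of "True". A minimal client-go sketch of the node half of that check, assuming the kubeconfig path used elsewhere in this log; the polling loop and helper name are illustrative rather than minikube's own code:

	// Sketch of a "wait for node Ready" check like the node_ready.go lines above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Mirror the 6m0s budget from the log, polling every two seconds.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			if ok, err := nodeReady(ctx, cs, "addons-368929"); err == nil && ok {
				fmt.Println(`node "addons-368929" is Ready`)
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for node to be Ready")
			case <-time.After(2 * time.Second):
			}
		}
	}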
	I0915 06:31:18.135940   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:31:18.135964   13942 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:31:18.141367   13942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:18.199001   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:31:18.204686   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:31:18.204710   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:31:18.222305   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:31:18.222333   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:31:18.235001   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:31:18.242915   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:31:18.264618   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:31:18.264645   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:31:18.278064   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:31:18.295028   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:31:18.313913   13942 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:31:18.313945   13942 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:31:18.321100   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:31:18.321126   13942 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:31:18.324341   13942 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:31:18.324361   13942 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:31:18.342086   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:31:18.355928   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:31:18.386848   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:31:18.386873   13942 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:31:18.430309   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:31:18.430338   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:31:18.436199   13942 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:31:18.436227   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:31:18.467018   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:31:18.467043   13942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:31:18.469097   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:31:18.469118   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:31:18.475758   13942 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:31:18.475776   13942 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:31:18.524849   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:31:18.559766   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:31:18.559796   13942 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:31:18.574119   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:31:18.629489   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:31:18.629514   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:31:18.636860   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:31:18.636883   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:31:18.656652   13942 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:31:18.656681   13942 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:31:18.671346   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:31:18.671371   13942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:31:18.776151   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:31:18.776174   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:31:18.786697   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:31:18.786725   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:31:18.790802   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:31:18.790824   13942 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:31:18.811252   13942 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:31:18.811276   13942 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:31:18.841135   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:31:18.940848   13942 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:18.940871   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:31:18.948147   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:31:18.968172   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:31:18.968200   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:31:19.099306   13942 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:31:19.099337   13942 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:31:19.208753   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:19.261571   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:31:19.261592   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:31:19.427555   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:31:19.427591   13942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:31:19.452460   13942 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:31:19.452489   13942 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:31:19.729819   13942 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.75553178s)
	I0915 06:31:19.729857   13942 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0915 06:31:19.729914   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.530876961s)
	I0915 06:31:19.729955   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.729966   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.730363   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.730385   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.730385   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:19.730403   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.730418   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.730721   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.730736   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.737048   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.737066   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.737366   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.737390   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.737396   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:19.835914   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:31:19.835934   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:31:19.848468   13942 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:31:19.848493   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:31:20.068594   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:31:20.139377   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:31:20.139404   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:31:20.147456   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:20.234504   13942 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-368929" context rescaled to 1 replicas
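The kapi.go line above is an ordinary Deployment scale change: the coredns deployment in kube-system is pinned to a single replica. A hedged client-go sketch of the same operation, with error handling abbreviated:

	// Sketch of rescaling kube-system/coredns to 1 replica, as kapi.go reports.
	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Fetch kube-system/coredns, set spec.replicas to 1, and push the update.
		deployments := cs.AppsV1().Deployments("kube-system")
		dep, err := deployments.Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		replicas := int32(1)
		dep.Spec.Replicas = &replicas
		if _, err := deployments.Update(context.TODO(), dep, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}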
	I0915 06:31:20.491704   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:31:20.491730   13942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:31:20.932400   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:31:22.212244   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:22.409208   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.174166978s)
	I0915 06:31:22.409210   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.166269282s)
	I0915 06:31:22.409299   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409318   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409257   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409391   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409620   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.409658   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.409665   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.409672   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409678   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409744   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.409768   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.409783   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.409793   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409801   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.410154   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.410195   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.410199   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.410217   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.410251   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.410221   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:24.154654   13942 pod_ready.go:93] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:24.154684   13942 pod_ready.go:82] duration metric: took 6.01329144s for pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:24.154696   13942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:24.756169   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:31:24.756215   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:24.759593   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:24.760038   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:24.760065   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:24.760279   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:24.760520   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:24.760709   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:24.760868   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:25.159761   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:31:25.482013   13942 addons.go:234] Setting addon gcp-auth=true in "addons-368929"
	I0915 06:31:25.482064   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:25.482369   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:25.482396   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:25.497336   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0915 06:31:25.497758   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:25.498209   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:25.498231   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:25.498517   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:25.499067   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:25.499103   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:25.514609   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0915 06:31:25.515143   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:25.515688   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:25.515716   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:25.516029   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:25.516249   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:25.517863   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:25.518086   13942 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:31:25.518112   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:25.520701   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:25.521094   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:25.521124   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:25.521252   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:25.521421   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:25.521577   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:25.521709   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:26.232203   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:26.243417   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.965315482s)
	I0915 06:31:26.243453   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.948392742s)
	I0915 06:31:26.243471   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243480   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243483   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243491   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243629   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.901516275s)
	I0915 06:31:26.243667   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243675   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.887721395s)
	I0915 06:31:26.243697   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243713   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243752   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.718870428s)
	I0915 06:31:26.243780   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243794   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243853   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.243869   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.243874   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.669731672s)
	I0915 06:31:26.243878   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243886   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243891   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243899   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243677   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243962   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.243992   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.243998   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244005   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244011   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244024   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.402863813s)
	I0915 06:31:26.244039   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244047   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244076   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.244093   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.244094   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.244103   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244111   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244115   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.244121   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244127   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.295952452s)
	I0915 06:31:26.244138   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244145   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244147   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244155   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244156   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244249   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.035463778s)
	W0915 06:31:26.244279   13942 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:31:26.244307   13942 retry.go:31] will retry after 256.93896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
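	The failure above and the "will retry after 256.93896ms" line show the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, so the first apply can beat API discovery and fail with "ensure CRDs are installed first", after which the whole batch is reapplied (the retry at 06:31:26.502303 below also adds --force). A minimal sketch of that retry-with-backoff pattern, assuming only os/exec and a kubectl binary on PATH -- this is not minikube's actual retry.go, and the file list and backoff schedule are illustrative:

	// crdretry.go: hypothetical sketch of retrying a batched kubectl apply
	// until the CRD-backed resources can be mapped, mirroring the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// applyManifests shells out to kubectl once; on failure it returns the
	// combined output so the caller can see errors like "ensure CRDs are installed first".
	func applyManifests(kubeconfig string, files ...string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		// The CRDs and the custom resources that depend on them are applied in
		// one batch, so the first attempt can race API discovery.
		files := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		}
		backoff := 250 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			if err := applyManifests("/var/lib/minikube/kubeconfig", files...); err == nil {
				fmt.Println("apply succeeded on attempt", attempt)
				return
			} else {
				fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, backoff, err)
			}
			time.Sleep(backoff)
			backoff *= 2 // simple exponential backoff; the real schedule differs
		}
		fmt.Println("giving up after 5 attempts")
	}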
	I0915 06:31:26.244415   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.175791562s)
	I0915 06:31:26.244434   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244443   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245740   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245773   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245783   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245793   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.245803   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245868   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245878   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245886   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.245892   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245938   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245963   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245982   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245990   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245997   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.246004   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.246041   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246060   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246066   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246295   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246321   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246328   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246504   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246547   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246537   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246564   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246564   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246583   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246589   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246624   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246635   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246645   13942 addons.go:475] Verifying addon registry=true in "addons-368929"
	I0915 06:31:26.246763   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246789   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246797   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246808   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.246818   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.246946   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246973   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246979   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246987   13942 addons.go:475] Verifying addon metrics-server=true in "addons-368929"
	I0915 06:31:26.247083   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.247110   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.247120   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248059   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.248078   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248087   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.248095   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.248285   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.248299   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248307   13942 addons.go:475] Verifying addon ingress=true in "addons-368929"
	I0915 06:31:26.248402   13942 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-368929 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:31:26.248878   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.248901   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.250860   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.251360   13942 out.go:177] * Verifying registry addon...
	I0915 06:31:26.252258   13942 out.go:177] * Verifying ingress addon...
	I0915 06:31:26.253897   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:31:26.254716   13942 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:31:26.282507   13942 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:31:26.282535   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.283231   13942 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:31:26.283254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
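	The repeated kapi.go:96 lines that follow are a simple readiness poll: the same label selector is re-queried until the matching pods leave Pending. A rough sketch of such a loop, assuming kubectl is available and checking only status.phase rather than the full Ready condition the real kapi.go waits on -- the selector, namespace, and timeout are taken from the log, but the code itself is hypothetical:

	// podwait.go: hypothetical polling loop for pods matching a label selector.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podPhases returns status.phase for every pod matching the selector.
	func podPhases(namespace, selector string) ([]string, error) {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		const (
			namespace = "kube-system"
			selector  = "kubernetes.io/minikube-addons=registry"
			timeout   = 6 * time.Minute
		)
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			phases, err := podPhases(namespace, selector)
			if err == nil && len(phases) > 0 {
				running := 0
				for _, p := range phases {
					if p == "Running" {
						running++
					}
				}
				if running == len(phases) {
					fmt.Println("all", running, "pods are Running")
					return
				}
			}
			fmt.Printf("waiting for pods %q, current phases: %v\n", selector, phases)
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pods", selector)
	}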
	I0915 06:31:26.326048   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.326076   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.326366   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.326389   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.502303   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:26.763104   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.763404   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.464589   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.465574   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.760221   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.760580   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.262507   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.263438   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.687007   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:28.777944   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.778464   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.790673   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.858215393s)
	I0915 06:31:28.790714   13942 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.272605642s)
	I0915 06:31:28.790731   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.790749   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.790820   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.288483379s)
	I0915 06:31:28.790865   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.790883   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791037   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:28.791080   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791088   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791096   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.791102   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791119   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791129   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791137   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.791143   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791312   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:28.791359   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791365   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791374   13942 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-368929"
	I0915 06:31:28.791536   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791550   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.792735   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:28.793437   13942 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:31:28.795140   13942 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:31:28.795935   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:31:28.796597   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:31:28.796611   13942 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:31:28.830229   13942 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:31:28.830253   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.871919   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:31:28.871943   13942 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:31:28.958746   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:31:28.958766   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:31:28.979296   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:31:29.260856   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.260969   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.300857   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.763057   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.763185   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.815747   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.011418   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.032085812s)
	I0915 06:31:30.011471   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:30.011485   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:30.011741   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:30.011804   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:30.011820   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:30.011832   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:30.011842   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:30.012069   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:30.012085   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:30.014149   13942 addons.go:475] Verifying addon gcp-auth=true in "addons-368929"
	I0915 06:31:30.015992   13942 out.go:177] * Verifying gcp-auth addon...
	I0915 06:31:30.018271   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:31:30.051440   13942 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:31:30.051458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.261829   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.261988   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.302477   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.525517   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.658488   13942 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xbx5t" not found
	I0915 06:31:30.658511   13942 pod_ready.go:82] duration metric: took 6.503808371s for pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace to be "Ready" ...
	E0915 06:31:30.658521   13942 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xbx5t" not found
	I0915 06:31:30.658528   13942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.665242   13942 pod_ready.go:93] pod "etcd-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.665263   13942 pod_ready.go:82] duration metric: took 6.72824ms for pod "etcd-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.665272   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.671635   13942 pod_ready.go:93] pod "kube-apiserver-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.671653   13942 pod_ready.go:82] duration metric: took 6.375828ms for pod "kube-apiserver-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.671661   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.678724   13942 pod_ready.go:93] pod "kube-controller-manager-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.678750   13942 pod_ready.go:82] duration metric: took 7.08028ms for pod "kube-controller-manager-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.678762   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ldpsk" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.687370   13942 pod_ready.go:93] pod "kube-proxy-ldpsk" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.687396   13942 pod_ready.go:82] duration metric: took 8.62656ms for pod "kube-proxy-ldpsk" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.687405   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.767076   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.767584   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.800983   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.859527   13942 pod_ready.go:93] pod "kube-scheduler-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.859556   13942 pod_ready.go:82] duration metric: took 172.143761ms for pod "kube-scheduler-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.859566   13942 pod_ready.go:39] duration metric: took 12.781287726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:31:30.859585   13942 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:31:30.859643   13942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:31:30.917869   13942 api_server.go:72] duration metric: took 13.386663133s to wait for apiserver process to appear ...
	I0915 06:31:30.917897   13942 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:31:30.917922   13942 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0915 06:31:30.923875   13942 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0915 06:31:30.924981   13942 api_server.go:141] control plane version: v1.31.1
	I0915 06:31:30.924999   13942 api_server.go:131] duration metric: took 7.095604ms to wait for apiserver health ...
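	The healthz probe above amounts to an HTTPS GET against the apiserver's /healthz endpoint that expects a 200 "ok" body. A bare-bones sketch of that check follows; the real client authenticates with the kubeconfig's client certificates, and skipping TLS verification here is an assumption made purely to keep the example short:

	// healthz.go: hypothetical apiserver health probe, analogous to the check logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only: do not verify the apiserver certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.212:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}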
	I0915 06:31:30.925006   13942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:31:31.022799   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.064433   13942 system_pods.go:59] 18 kube-system pods found
	I0915 06:31:31.064467   13942 system_pods.go:61] "coredns-7c65d6cfc9-d42kz" [df259178-5edc-4af0-97ba-206daeab8c29] Running
	I0915 06:31:31.064479   13942 system_pods.go:61] "csi-hostpath-attacher-0" [0adda2d4-063c-4794-8f6b-ea93890a4674] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:31.064489   13942 system_pods.go:61] "csi-hostpath-resizer-0" [54b009bd-6cc0-49e7-82a2-9f7cf160569b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:31.064500   13942 system_pods.go:61] "csi-hostpathplugin-lsgqp" [7794aa6e-993e-4625-8fe9-562208645794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:31.064508   13942 system_pods.go:61] "etcd-addons-368929" [fd2748fc-bfea-4a7f-891d-99077f8233bf] Running
	I0915 06:31:31.064514   13942 system_pods.go:61] "kube-apiserver-addons-368929" [8ecbb12d-50b4-4d33-be92-d1430dbb9b31] Running
	I0915 06:31:31.064522   13942 system_pods.go:61] "kube-controller-manager-addons-368929" [966825ec-c456-4f8d-bb17-345e7ea3f48c] Running
	I0915 06:31:31.064529   13942 system_pods.go:61] "kube-ingress-dns-minikube" [ba1fa65c-7021-4ddf-a816-9f840f28af7d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:31.064539   13942 system_pods.go:61] "kube-proxy-ldpsk" [a2b364d0-170c-491f-a76a-1a9aac8268d1] Running
	I0915 06:31:31.064543   13942 system_pods.go:61] "kube-scheduler-addons-368929" [02b92939-9320-46e0-8afd-1f22d86465db] Running
	I0915 06:31:31.064549   13942 system_pods.go:61] "metrics-server-84c5f94fbc-2pshh" [0443fc45-c95c-4fab-9dfe-a1b598ac6c8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:31.064555   13942 system_pods.go:61] "nvidia-device-plugin-daemonset-kl795" [d0981521-b267-4cf9-82e3-73ca27f55631] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0915 06:31:31.064560   13942 system_pods.go:61] "registry-66c9cd494c-hbp2b" [29e66421-b96f-416d-b126-9c3b0d11bc7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:31.064566   13942 system_pods.go:61] "registry-proxy-ncp27" [cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:31.064574   13942 system_pods.go:61] "snapshot-controller-56fcc65765-gpfpd" [b21fd3c8-1828-47d4-8c9d-3281ea26cc2e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.064586   13942 system_pods.go:61] "snapshot-controller-56fcc65765-nj866" [364b2721-2e61-435f-b087-0c183c2e9c65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.064592   13942 system_pods.go:61] "storage-provisioner" [bf2fb433-e07a-4c6e-8438-67625e0215a8] Running
	I0915 06:31:31.064604   13942 system_pods.go:61] "tiller-deploy-b48cc5f79-cw67q" [6012a392-8d4a-4d69-a877-31fa7f992089] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:31:31.064613   13942 system_pods.go:74] duration metric: took 139.600952ms to wait for pod list to return data ...
	I0915 06:31:31.064626   13942 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:31:31.258650   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.259446   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.259836   13942 default_sa.go:45] found service account: "default"
	I0915 06:31:31.259856   13942 default_sa.go:55] duration metric: took 195.22286ms for default service account to be created ...
	I0915 06:31:31.259867   13942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:31:31.300588   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.464010   13942 system_pods.go:86] 18 kube-system pods found
	I0915 06:31:31.464039   13942 system_pods.go:89] "coredns-7c65d6cfc9-d42kz" [df259178-5edc-4af0-97ba-206daeab8c29] Running
	I0915 06:31:31.464047   13942 system_pods.go:89] "csi-hostpath-attacher-0" [0adda2d4-063c-4794-8f6b-ea93890a4674] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:31.464055   13942 system_pods.go:89] "csi-hostpath-resizer-0" [54b009bd-6cc0-49e7-82a2-9f7cf160569b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:31.464062   13942 system_pods.go:89] "csi-hostpathplugin-lsgqp" [7794aa6e-993e-4625-8fe9-562208645794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:31.464067   13942 system_pods.go:89] "etcd-addons-368929" [fd2748fc-bfea-4a7f-891d-99077f8233bf] Running
	I0915 06:31:31.464072   13942 system_pods.go:89] "kube-apiserver-addons-368929" [8ecbb12d-50b4-4d33-be92-d1430dbb9b31] Running
	I0915 06:31:31.464079   13942 system_pods.go:89] "kube-controller-manager-addons-368929" [966825ec-c456-4f8d-bb17-345e7ea3f48c] Running
	I0915 06:31:31.464086   13942 system_pods.go:89] "kube-ingress-dns-minikube" [ba1fa65c-7021-4ddf-a816-9f840f28af7d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:31.464098   13942 system_pods.go:89] "kube-proxy-ldpsk" [a2b364d0-170c-491f-a76a-1a9aac8268d1] Running
	I0915 06:31:31.464106   13942 system_pods.go:89] "kube-scheduler-addons-368929" [02b92939-9320-46e0-8afd-1f22d86465db] Running
	I0915 06:31:31.464114   13942 system_pods.go:89] "metrics-server-84c5f94fbc-2pshh" [0443fc45-c95c-4fab-9dfe-a1b598ac6c8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:31.464127   13942 system_pods.go:89] "nvidia-device-plugin-daemonset-kl795" [d0981521-b267-4cf9-82e3-73ca27f55631] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0915 06:31:31.464136   13942 system_pods.go:89] "registry-66c9cd494c-hbp2b" [29e66421-b96f-416d-b126-9c3b0d11bc7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:31.464145   13942 system_pods.go:89] "registry-proxy-ncp27" [cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:31.464153   13942 system_pods.go:89] "snapshot-controller-56fcc65765-gpfpd" [b21fd3c8-1828-47d4-8c9d-3281ea26cc2e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.464161   13942 system_pods.go:89] "snapshot-controller-56fcc65765-nj866" [364b2721-2e61-435f-b087-0c183c2e9c65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.464166   13942 system_pods.go:89] "storage-provisioner" [bf2fb433-e07a-4c6e-8438-67625e0215a8] Running
	I0915 06:31:31.464172   13942 system_pods.go:89] "tiller-deploy-b48cc5f79-cw67q" [6012a392-8d4a-4d69-a877-31fa7f992089] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:31:31.464181   13942 system_pods.go:126] duration metric: took 204.307671ms to wait for k8s-apps to be running ...
	I0915 06:31:31.464191   13942 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:31:31.464244   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:31:31.486956   13942 system_svc.go:56] duration metric: took 22.754715ms WaitForService to wait for kubelet
	I0915 06:31:31.486990   13942 kubeadm.go:582] duration metric: took 13.955789555s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:31:31.487013   13942 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:31:31.522077   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.659879   13942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 06:31:31.659920   13942 node_conditions.go:123] node cpu capacity is 2
	I0915 06:31:31.659934   13942 node_conditions.go:105] duration metric: took 172.914644ms to run NodePressure ...
	I0915 06:31:31.659947   13942 start.go:241] waiting for startup goroutines ...
	I0915 06:31:31.759750   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.760177   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.800755   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.021954   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.259791   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.260569   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.300924   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.522475   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.759438   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.759934   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.800621   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.172220   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.271906   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.272260   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.302687   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.522439   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.763498   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.764289   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.801429   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.023038   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.259772   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.260041   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.300561   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.521913   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.759623   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.759710   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.800723   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.021779   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.260351   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.260447   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.299779   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.521913   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.760515   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.760927   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.800167   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.022203   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.257726   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.259665   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.299888   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.522528   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.758673   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.760425   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.801181   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.022185   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.258988   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.259048   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.300658   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.522233   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.757443   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.758723   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.800691   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.022095   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.257419   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.259009   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.300410   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.522197   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.757617   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.759144   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.800893   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.022318   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.261103   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.261240   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.300803   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.521354   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.759863   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.760107   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.802301   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.022269   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.257834   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.262295   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.300771   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.522661   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.759261   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.759486   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.801798   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.021829   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.289792   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.289896   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.301063   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.521512   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.761098   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.761110   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.801396   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.416726   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.417219   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.417240   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.417651   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.522481   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.760002   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.760206   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.801257   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.022267   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.257969   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:43.260312   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.304149   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.522666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.759718   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:43.761579   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.800010   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.021599   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.258922   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:44.259066   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.300086   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.521602   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.758715   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:44.759687   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.801888   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.022545   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.258928   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:45.260028   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.300426   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.522347   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.757677   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:45.759429   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.801059   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.023666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.259131   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:46.259319   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.301039   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.521574   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.758246   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:46.759289   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.800758   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.022872   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.700346   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.701683   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:47.701903   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.702433   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.759173   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.759895   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:47.861235   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.021603   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.259458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:48.259485   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.300707   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.522271   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.762907   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:48.763255   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.800498   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.022348   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.257789   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:49.258990   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.300932   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.521296   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.759707   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.760030   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:49.801156   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.021582   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.259593   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:50.259614   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.300101   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.522458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.758309   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:50.759307   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.801005   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.021667   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.258800   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:51.259754   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.300360   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.522137   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.916983   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.918391   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:51.918708   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.022345   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.257769   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:52.259200   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.300612   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.522624   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.759128   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:52.760003   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.800696   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.022034   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.258260   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:53.259030   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.299898   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.522948   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.758046   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:53.759190   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.801909   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.022611   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.258314   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:54.259394   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.299868   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.522225   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.759462   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:54.759954   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.800966   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.021560   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.259668   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:55.260096   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.300543   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.522930   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.759164   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:55.759630   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.800281   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.023274   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.258687   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:56.258983   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.300450   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.521941   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.758690   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:56.759184   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.800444   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.022085   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.328096   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:57.328128   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.328468   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.522064   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.758754   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:57.761358   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.801386   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.022197   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.259116   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:58.259355   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.301472   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.522238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.757647   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:58.759138   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.800143   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:59.021428   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.259139   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:59.259914   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:59.300195   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:59.521969   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.757634   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:59.759388   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:59.801766   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:00.310029   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.310485   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.310541   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:00.310734   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:00.522275   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.757676   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:00.759851   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.800259   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:01.022105   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.263670   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:01.264256   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.363605   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:01.522274   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.758855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:01.759192   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.800380   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:02.022392   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.258770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:02.258779   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.300507   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:02.523063   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.757767   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:02.759609   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.800172   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:03.024853   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.258447   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:03.260135   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.301456   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:03.521270   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.759277   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.759579   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:03.859786   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:04.023200   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.259308   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:04.259454   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.302238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:04.524167   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.759036   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:04.759483   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.800855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:05.022461   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.257848   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:05.259070   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.300542   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:05.522141   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.757343   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:05.759078   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.800257   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:06.021588   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.259151   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:06.259229   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.301635   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:06.522501   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.760161   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:06.760475   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.800547   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:07.022162   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.260554   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:07.260733   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.300362   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:07.524441   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.757879   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:07.759690   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.799841   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:08.022590   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.258771   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.261346   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:08.300492   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:08.521937   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.760065   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:08.760608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.800923   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:09.023054   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.258254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:09.261196   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.303211   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:09.521992   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.759542   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:09.759968   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.800419   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:10.022241   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:10.257256   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:10.259665   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.301095   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:10.522381   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:10.758339   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:10.760016   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.800973   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:11.022131   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:11.257766   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:11.259848   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.300515   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:11.522584   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:11.759504   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:11.759819   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.800734   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:12.022702   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:12.259127   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.259205   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:12.301248   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:12.522307   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:12.759373   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:12.759784   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.800790   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:13.022473   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:13.258088   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:13.259484   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:13.301523   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:13.522640   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:13.760074   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:13.760590   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:13.861156   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:14.021516   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:14.259488   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:14.259642   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:14.300721   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:14.522807   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:14.778229   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:14.779139   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:14.873475   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:15.022821   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:15.259680   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:15.259809   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:15.300641   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:15.521637   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:15.758806   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:15.759633   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:15.800222   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:16.021553   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:16.259499   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:16.259517   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:16.299855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:16.522762   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:16.759439   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:16.759858   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:16.800448   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:17.022916   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:17.269753   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:17.273875   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:17.311380   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:17.521792   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:17.757420   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:17.760061   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:17.800763   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:18.022671   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:18.260927   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:18.261314   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:18.360499   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:18.522431   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:18.758039   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:18.759972   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:18.800762   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:19.021770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:19.258785   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:19.258915   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:19.300433   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:19.522477   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:19.758545   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:19.758909   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:19.799951   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:20.021583   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:20.258286   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:20.259349   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:20.300404   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:20.522162   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:20.757244   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:20.760035   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:20.800381   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:21.022666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:21.259375   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:21.259813   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:21.299671   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:21.522782   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:21.759071   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:21.759579   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:21.801715   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:22.022489   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:22.258632   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:22.258786   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:22.301546   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:22.521535   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:22.757652   13942 kapi.go:107] duration metric: took 56.503752424s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:32:22.759703   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:22.800194   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:23.021373   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:23.259556   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:23.300956   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:23.522488   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:23.759651   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:23.950468   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:24.021780   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:24.259077   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:24.300587   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:24.522714   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:24.759126   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:24.801761   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:25.021962   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:25.258702   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:25.300610   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:25.527977   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:25.758500   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:25.801128   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:26.024917   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:26.258889   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:26.300533   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:26.531719   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:26.760215   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:26.861604   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:27.022469   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:27.259796   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:27.301694   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:27.522577   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:27.759608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:27.799769   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:28.022221   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:28.260134   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:28.362251   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:28.522884   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:28.758529   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:28.800998   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:29.021597   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:29.260071   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:29.300411   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:29.521843   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:29.759942   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:29.808216   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:30.025869   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:30.258745   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:30.300960   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:30.526667   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:30.761078   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:30.808613   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:31.023050   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.258854   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:31.300480   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:31.522174   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.761507   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:31.800897   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:32.022757   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.261197   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:32.301193   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:32.522071   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.762443   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:32.801404   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:33.021999   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.260491   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:33.300695   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:33.525170   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.769640   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:33.868134   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:34.022189   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.260688   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:34.360810   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:34.525722   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.766523   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:34.805396   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:35.030161   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:35.258936   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:35.300824   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:35.522082   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:35.758581   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:35.801492   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:36.021288   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:36.259323   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:36.300415   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:36.522271   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:36.761188   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:36.800799   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:37.022023   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:37.262566   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:37.300820   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:37.522925   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:37.758831   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:37.799987   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:38.022158   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:38.260608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:38.362196   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:38.521238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:38.999060   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:38.999332   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:39.100770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:39.267733   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:39.304254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:39.527622   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:39.759148   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:39.801011   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:40.023997   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:40.258867   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:40.301651   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:40.521565   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:40.759515   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:40.800939   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:41.022706   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:41.259458   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:41.301688   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:41.806497   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:41.811813   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:41.812222   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:42.023382   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:42.267386   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:42.367885   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:42.525013   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:42.759644   13942 kapi.go:107] duration metric: took 1m16.504925037s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:32:42.800316   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:43.022950   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:43.300696   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:43.521739   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:43.802846   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:44.022227   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:44.300361   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:44.522479   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:44.802449   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:45.022566   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:45.300843   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:45.522072   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:45.800593   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:46.022008   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:46.301212   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:46.521319   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:46.800712   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:47.022599   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:47.301146   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:47.522228   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:47.801980   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:48.021550   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:48.301089   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:48.521254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:48.802057   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:49.022313   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:49.307681   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:49.522886   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:49.803712   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:50.022128   13942 kapi.go:107] duration metric: took 1m20.003852984s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:32:50.023467   13942 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-368929 cluster.
	I0915 06:32:50.024716   13942 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:32:50.025878   13942 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
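The three messages above describe the gcp-auth opt-out mechanism: the addon's webhook (the gcp-auth pod being waited on in this log) skips any pod whose spec carries the gcp-auth-skip-secret label. A minimal sketch of both options follows; the pod name, the busybox image, and the label value "true" are illustrative assumptions, only the label key and the --refresh hint come from the messages above.

kubectl --context addons-368929 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                 # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"     # key taken from the message above; the value "true" is an assumption
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF

# For pods that already existed when the addon was enabled, re-run the addon with --refresh:
out/minikube-linux-amd64 -p addons-368929 addons enable gcp-auth --refresh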
	I0915 06:32:50.304584   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:50.803369   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:51.300707   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:51.801178   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:52.301423   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:52.801624   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:53.532327   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:53.810592   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:54.301743   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:54.800975   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:55.300394   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:55.800085   13942 kapi.go:107] duration metric: took 1m27.004147412s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:32:55.802070   13942 out.go:177] * Enabled addons: default-storageclass, ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0915 06:32:55.803500   13942 addons.go:510] duration metric: took 1m38.272362908s for enable addons: enabled=[default-storageclass ingress-dns storage-provisioner nvidia-device-plugin helm-tiller cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0915 06:32:55.803536   13942 start.go:246] waiting for cluster config update ...
	I0915 06:32:55.803553   13942 start.go:255] writing updated cluster config ...
	I0915 06:32:55.803803   13942 ssh_runner.go:195] Run: rm -f paused
	I0915 06:32:55.854452   13942 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:32:55.856106   13942 out.go:177] * Done! kubectl is now configured to use "addons-368929" cluster and "default" namespace by default
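The kapi.go:96 entries above are minikube polling each addon's pods by label selector until they report ready, and kapi.go:107 records how long each selector took: 1m16.5s for app.kubernetes.io/name=ingress-nginx, 1m20.0s for kubernetes.io/minikube-addons=gcp-auth, and 1m27.0s for kubernetes.io/minikube-addons=csi-hostpath-driver. Roughly the same check can be reproduced by hand with kubectl wait; this is a sketch, not minikube's implementation, and the namespaces are the ones these addons normally deploy into rather than values read from this log.

kubectl --context addons-368929 -n ingress-nginx wait pod \
  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
# note: the completed admission create/patch Job pods carry the same label and never become Ready,
# so they would need to be excluded for the command above to return
kubectl --context addons-368929 -n kube-system wait pod \
  -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m
kubectl --context addons-368929 -n gcp-auth wait pod \
  -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=6m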
	
	
	==> CRI-O <==
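The CRI-O log below is not addon traffic; it shows a CRI client repeatedly issuing the same three RPCs against crio (Version, ImageFsInfo, ListContainers), starting a new cycle every few tens of milliseconds (the timestamps advance by roughly 40ms per cycle). Assuming crictl is available inside the minikube VM, which it normally is for the crio runtime, the same three queries can be issued by hand against the socket served by crio[662]; the explicit --runtime-endpoint is an assumption matching crio's default socket path.

out/minikube-linux-amd64 -p addons-368929 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
out/minikube-linux-amd64 -p addons-368929 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo"
out/minikube-linux-amd64 -p addons-368929 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"

The ps -a call corresponds to the unfiltered ListContainers responses below, which list every container on the node, including the exited ingress-nginx admission create/patch containers.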
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.313597270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382669313569635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=357546a7-04bf-430a-8e01-9440ec6f7fef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.314205732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=273d4fe1-353c-4be9-afd2-d64466e84bfd name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.314284113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=273d4fe1-353c-4be9-afd2-d64466e84bfd name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.314670534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172638
1907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b
6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=273d4fe1-353c-4be9-afd2-d64466e84bfd name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.352653328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e37d30c0-6b53-49c0-b97e-5aae03ceada6 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.352876661Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e37d30c0-6b53-49c0-b97e-5aae03ceada6 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.354331634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d555a9fd-a085-42b2-98c9-267ec33bd82c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.355823120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382669355796076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d555a9fd-a085-42b2-98c9-267ec33bd82c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.356334337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48c12a74-4727-44b4-aac9-596b6af0ba08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.356387061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48c12a74-4727-44b4-aac9-596b6af0ba08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.356897338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172638
1907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b
6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48c12a74-4727-44b4-aac9-596b6af0ba08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.394080705Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c3497c7-7c4f-463f-881b-849327b5b6f8 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.394153653Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c3497c7-7c4f-463f-881b-849327b5b6f8 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.395878788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06a069ec-3447-4e1c-bbdc-2dd8eca62c8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.397429478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382669397400149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06a069ec-3447-4e1c-bbdc-2dd8eca62c8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.398091745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=547f0223-94f7-453e-9daf-e965fd144355 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.398147429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=547f0223-94f7-453e-9daf-e965fd144355 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.398561676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172638
1907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b
6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=547f0223-94f7-453e-9daf-e965fd144355 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.438964224Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bde029f6-d9ba-4c06-bd16-938deb99b44c name=/runtime.v1.RuntimeService/Version
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.439046328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bde029f6-d9ba-4c06-bd16-938deb99b44c name=/runtime.v1.RuntimeService/Version
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.440088262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6297ae00-16a6-4f78-828b-943f2aca7c93 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.441640422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382669441606232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6297ae00-16a6-4f78-828b-943f2aca7c93 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.442269791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1a0d97a-a02c-4e26-ac59-b8beb9b6bac6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.442322729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1a0d97a-a02c-4e26-ac59-b8beb9b6bac6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:44:29 addons-368929 crio[662]: time="2024-09-15 06:44:29.442694234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f89afc4680145c4669218863eb9115877d1c2a0299b1adad8a304633834b036c,PodSandboxId:8489a665b46d5be7194ece239a3e351b4db93e93d45e4be66f6493e37801900f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948279506841,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-dd66v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5a46bbd-02be-4c1f-aebb-00b53cf4c067,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20da8da7f0f5d980cd277a4df22df38e5e008aec108fbe15b44bf3378658b2a8,PodSandboxId:b5070610beb19bb8e2306348bcc578fc4045be505e52b56b7a20975f6dab4f8b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726381948158370782,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mn4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1086452a-e1cb-4387-bec2-242bcb5c68dc,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172638
1907780581831,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b
6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1a0d97a-a02c-4e26-ac59-b8beb9b6bac6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a998d8313e4f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   02a0dd44ead52       hello-world-app-55bf9c44b4-hbbg7
	00c6d745c3b5a       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   56736db040b57       nginx
	af20c2eee64f4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   4805b7ff0a6b1       gcp-auth-89d5ffd79-g2rmd
	f89afc4680145       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   8489a665b46d5       ingress-nginx-admission-patch-dd66v
	20da8da7f0f5d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   b5070610beb19       ingress-nginx-admission-create-9mn4k
	e762ef5d36b86       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   5465db13b3322       metrics-server-84c5f94fbc-2pshh
	522296a807289       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   14b4ae1ab9f1b       storage-provisioner
	0eaf92b0ac4cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   b19df699e240a       coredns-7c65d6cfc9-d42kz
	f44a755ad6406       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   3090e56371ab7       kube-proxy-ldpsk
	2d2c642ca90bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   c91b2b7971471       etcd-addons-368929
	5278a91f04afe       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   d1a7384c192cb       kube-scheduler-addons-368929
	66eb2bd2d4313       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   ddbb5486a2f5f       kube-controller-manager-addons-368929
	0f00b1281db41       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   801081b18db2c       kube-apiserver-addons-368929
	
	
	==> coredns [0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e] <==
	[INFO] 10.244.0.7:60872 - 2697 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000372412s
	[INFO] 10.244.0.7:54481 - 63880 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200669s
	[INFO] 10.244.0.7:54481 - 36493 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014208s
	[INFO] 10.244.0.7:58760 - 443 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090787s
	[INFO] 10.244.0.7:58760 - 23481 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088177s
	[INFO] 10.244.0.7:48535 - 47705 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000271192s
	[INFO] 10.244.0.7:48535 - 54567 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083036s
	[INFO] 10.244.0.7:42330 - 4731 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138141s
	[INFO] 10.244.0.7:42330 - 6517 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000092133s
	[INFO] 10.244.0.7:47964 - 26953 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000283443s
	[INFO] 10.244.0.7:47964 - 19270 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142949s
	[INFO] 10.244.0.7:49955 - 21487 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137257s
	[INFO] 10.244.0.7:49955 - 61676 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095206s
	[INFO] 10.244.0.7:38355 - 23195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000252309s
	[INFO] 10.244.0.7:38355 - 62100 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060261s
	[INFO] 10.244.0.7:43701 - 7554 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161529s
	[INFO] 10.244.0.7:43701 - 65420 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000048462s
	[INFO] 10.244.0.22:50845 - 48293 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000496971s
	[INFO] 10.244.0.22:56694 - 7666 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106022s
	[INFO] 10.244.0.22:53136 - 48746 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122296s
	[INFO] 10.244.0.22:43399 - 31030 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149247s
	[INFO] 10.244.0.22:48872 - 36794 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141706s
	[INFO] 10.244.0.22:38135 - 52360 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121673s
	[INFO] 10.244.0.22:39775 - 36027 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000834966s
	[INFO] 10.244.0.22:40967 - 58177 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00127761s
	
	
	==> describe nodes <==
	Name:               addons-368929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-368929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-368929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_31_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-368929
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:31:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-368929
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:44:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:42:14 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:42:14 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:42:14 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:42:14 +0000   Sun, 15 Sep 2024 06:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    addons-368929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6b3f2f71dbb42e29461dbb3bd421d93
	  System UUID:                a6b3f2f7-1dbb-42e2-9461-dbb3bd421d93
	  Boot ID:                    da80a0da-5697-4701-b6a4-39271e495e6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-hbbg7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-89d5ffd79-g2rmd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-d42kz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-368929                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-368929             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-368929    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ldpsk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-368929             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-2pshh          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-368929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-368929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-368929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-368929 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-368929 event: Registered Node addons-368929 in Controller
	
	
	==> dmesg <==
	[  +6.450745] kauditd_printk_skb: 66 callbacks suppressed
	[ +17.481041] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.378291] kauditd_printk_skb: 32 callbacks suppressed
	[Sep15 06:32] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.107181] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.597636] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.061750] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.516101] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.544554] kauditd_printk_skb: 47 callbacks suppressed
	[Sep15 06:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:38] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:40] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:41] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.724123] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.326898] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.150758] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.170813] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.523089] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.886920] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.541296] kauditd_printk_skb: 33 callbacks suppressed
	[Sep15 06:42] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.212644] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.696206] kauditd_printk_skb: 32 callbacks suppressed
	[Sep15 06:44] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2] <==
	{"level":"warn","ts":"2024-09-15T06:32:38.969972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.971025ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:38.969993Z","caller":"traceutil/trace.go:171","msg":"trace[1761732918] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1081; }","duration":"272.991379ms","start":"2024-09-15T06:32:38.696997Z","end":"2024-09-15T06:32:38.969988Z","steps":["trace[1761732918] 'agreement among raft nodes before linearized reading'  (duration: 272.960931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:38.970074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.201553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:38.970093Z","caller":"traceutil/trace.go:171","msg":"trace[235109500] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"192.218119ms","start":"2024-09-15T06:32:38.777867Z","end":"2024-09-15T06:32:38.970085Z","steps":["trace[235109500] 'agreement among raft nodes before linearized reading'  (duration: 192.18886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.779440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.051742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:41.782084Z","caller":"traceutil/trace.go:171","msg":"trace[1975915837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1088; }","duration":"281.712677ms","start":"2024-09-15T06:32:41.500363Z","end":"2024-09-15T06:32:41.782076Z","steps":["trace[1975915837] 'range keys from in-memory index tree'  (duration: 279.004355ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:41.781563Z","caller":"traceutil/trace.go:171","msg":"trace[25339734] linearizableReadLoop","detail":"{readStateIndex:1122; appliedIndex:1121; }","duration":"133.379421ms","start":"2024-09-15T06:32:41.648165Z","end":"2024-09-15T06:32:41.781545Z","steps":["trace[25339734] 'read index received'  (duration: 126.703432ms)","trace[25339734] 'applied index is now lower than readState.Index'  (duration: 6.675186ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:32:41.781804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.625248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-15T06:32:41.781933Z","caller":"traceutil/trace.go:171","msg":"trace[1390778342] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"155.371511ms","start":"2024-09-15T06:32:41.626549Z","end":"2024-09-15T06:32:41.781921Z","steps":["trace[1390778342] 'process raft request'  (duration: 148.372455ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:41.783606Z","caller":"traceutil/trace.go:171","msg":"trace[325349942] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1089; }","duration":"135.434363ms","start":"2024-09-15T06:32:41.648161Z","end":"2024-09-15T06:32:41.783595Z","steps":["trace[325349942] 'agreement among raft nodes before linearized reading'  (duration: 133.430374ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.783629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.612501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:41.783886Z","caller":"traceutil/trace.go:171","msg":"trace[808376363] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"100.870711ms","start":"2024-09-15T06:32:41.683006Z","end":"2024-09-15T06:32:41.783876Z","steps":["trace[808376363] 'agreement among raft nodes before linearized reading'  (duration: 100.588865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.786261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.155485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.212\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-09-15T06:32:41.787603Z","caller":"traceutil/trace.go:171","msg":"trace[32333181] range","detail":"{range_begin:/registry/masterleases/192.168.39.212; range_end:; response_count:1; response_revision:1089; }","duration":"104.495783ms","start":"2024-09-15T06:32:41.683094Z","end":"2024-09-15T06:32:41.787590Z","steps":["trace[32333181] 'agreement among raft nodes before linearized reading'  (duration: 103.092831ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:53.500272Z","caller":"traceutil/trace.go:171","msg":"trace[423926321] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1188; }","duration":"222.338479ms","start":"2024-09-15T06:32:53.277918Z","end":"2024-09-15T06:32:53.500256Z","steps":["trace[423926321] 'read index received'  (duration: 222.103394ms)","trace[423926321] 'applied index is now lower than readState.Index'  (duration: 234.479µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:32:53.500508Z","caller":"traceutil/trace.go:171","msg":"trace[865342865] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"458.921949ms","start":"2024-09-15T06:32:53.041572Z","end":"2024-09-15T06:32:53.500494Z","steps":["trace[865342865] 'process raft request'  (duration: 458.504383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:53.500612Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:32:53.041556Z","time spent":"459.003891ms","remote":"127.0.0.1:38690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1144 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-15T06:32:53.500512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.59145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:53.500848Z","caller":"traceutil/trace.go:171","msg":"trace[2102283515] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1153; }","duration":"222.946871ms","start":"2024-09-15T06:32:53.277893Z","end":"2024-09-15T06:32:53.500839Z","steps":["trace[2102283515] 'agreement among raft nodes before linearized reading'  (duration: 222.546912ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:41:08.543557Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1522}
	{"level":"info","ts":"2024-09-15T06:41:08.573071Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1522,"took":"28.970485ms","hash":4115302871,"current-db-size-bytes":6864896,"current-db-size":"6.9 MB","current-db-size-in-use-bytes":3567616,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-15T06:41:08.573176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4115302871,"revision":1522,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-15T06:41:20.310795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.008412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:41:20.310906Z","caller":"traceutil/trace.go:171","msg":"trace[1098128179] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2119; }","duration":"220.206928ms","start":"2024-09-15T06:41:20.090684Z","end":"2024-09-15T06:41:20.310891Z","steps":["trace[1098128179] 'range keys from in-memory index tree'  (duration: 219.905648ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:42:34.551551Z","caller":"traceutil/trace.go:171","msg":"trace[1509624691] transaction","detail":"{read_only:false; response_revision:2532; number_of_response:1; }","duration":"185.999699ms","start":"2024-09-15T06:42:34.365512Z","end":"2024-09-15T06:42:34.551511Z","steps":["trace[1509624691] 'process raft request'  (duration: 185.794249ms)"],"step_count":1}
	
	
	==> gcp-auth [af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73] <==
	2024/09/15 06:32:56 Ready to write response ...
	2024/09/15 06:40:59 Ready to marshal response ...
	2024/09/15 06:40:59 Ready to write response ...
	2024/09/15 06:40:59 Ready to marshal response ...
	2024/09/15 06:40:59 Ready to write response ...
	2024/09/15 06:41:07 Ready to marshal response ...
	2024/09/15 06:41:07 Ready to write response ...
	2024/09/15 06:41:09 Ready to marshal response ...
	2024/09/15 06:41:09 Ready to write response ...
	2024/09/15 06:41:12 Ready to marshal response ...
	2024/09/15 06:41:12 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:40 Ready to marshal response ...
	2024/09/15 06:41:40 Ready to write response ...
	2024/09/15 06:41:56 Ready to marshal response ...
	2024/09/15 06:41:56 Ready to write response ...
	2024/09/15 06:42:01 Ready to marshal response ...
	2024/09/15 06:42:01 Ready to write response ...
	2024/09/15 06:44:19 Ready to marshal response ...
	2024/09/15 06:44:19 Ready to write response ...
	
	
	==> kernel <==
	 06:44:29 up 13 min,  0 users,  load average: 0.20, 0.43, 0.41
	Linux addons-368929 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070] <==
	I0915 06:41:22.185630       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0915 06:41:28.694030       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:37.562228       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:38.571198       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:39.581372       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:40.597002       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:41.607613       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:42.617845       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:43.624836       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 06:41:55.500605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.500667       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.520225       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.520373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.561239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.561354       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.644936       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.645049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:41:56.582005       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:41:56.645832       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 06:41:56.728173       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	W0915 06:41:56.736990       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0915 06:41:56.920189       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.144.22"}
	I0915 06:42:12.493514       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:42:13.626888       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:44:19.590482       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.68.226"}
	
	
	==> kube-controller-manager [66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a] <==
	E0915 06:42:52.056139       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:03.606541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:03.606760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:16.774158       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:16.774279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:24.650986       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:24.651113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:36.086359       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:36.086452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:01.062328       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:01.062510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:08.745116       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:08.745281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:10.742911       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:10.743027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:15.845936       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:15.846013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:44:19.436101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="56.654789ms"
	I0915 06:44:19.459476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="23.318243ms"
	I0915 06:44:19.459557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.466µs"
	I0915 06:44:21.420963       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0915 06:44:21.423652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.378µs"
	I0915 06:44:21.430426       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0915 06:44:22.823951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.045671ms"
	I0915 06:44:22.825023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="52.279µs"
	
	
	==> kube-proxy [f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 06:31:21.923821       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 06:31:22.107135       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	E0915 06:31:22.107232       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:31:22.469316       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 06:31:22.469382       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 06:31:22.469406       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:31:22.502604       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:31:22.502994       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:31:22.503027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:31:22.513405       1 config.go:199] "Starting service config controller"
	I0915 06:31:22.517572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:31:22.517677       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:31:22.517749       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:31:22.524073       1 config.go:328] "Starting node config controller"
	I0915 06:31:22.524163       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:31:22.617837       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:31:22.617902       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:31:22.624325       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da] <==
	W0915 06:31:10.099747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:10.099810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:10.971911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:31:10.972055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:10.992635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:10.992769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.044975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:31:11.045071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.099337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:31:11.099564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.127086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:31:11.127389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.176096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:11.176240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.192815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:31:11.193073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.242830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:31:11.242950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.291677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:31:11.291812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.319296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:11.319464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.333004       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:31:11.333054       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:31:13.689226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:44:19 addons-368929 kubelet[1209]: I0915 06:44:19.563755    1209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf-gcp-creds\") pod \"hello-world-app-55bf9c44b4-hbbg7\" (UID: \"2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf\") " pod="default/hello-world-app-55bf9c44b4-hbbg7"
	Sep 15 06:44:20 addons-368929 kubelet[1209]: I0915 06:44:20.675808    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59x4r\" (UniqueName: \"kubernetes.io/projected/ba1fa65c-7021-4ddf-a816-9f840f28af7d-kube-api-access-59x4r\") pod \"ba1fa65c-7021-4ddf-a816-9f840f28af7d\" (UID: \"ba1fa65c-7021-4ddf-a816-9f840f28af7d\") "
	Sep 15 06:44:20 addons-368929 kubelet[1209]: I0915 06:44:20.677911    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1fa65c-7021-4ddf-a816-9f840f28af7d-kube-api-access-59x4r" (OuterVolumeSpecName: "kube-api-access-59x4r") pod "ba1fa65c-7021-4ddf-a816-9f840f28af7d" (UID: "ba1fa65c-7021-4ddf-a816-9f840f28af7d"). InnerVolumeSpecName "kube-api-access-59x4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:44:20 addons-368929 kubelet[1209]: I0915 06:44:20.776089    1209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-59x4r\" (UniqueName: \"kubernetes.io/projected/ba1fa65c-7021-4ddf-a816-9f840f28af7d-kube-api-access-59x4r\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:44:20 addons-368929 kubelet[1209]: I0915 06:44:20.776102    1209 scope.go:117] "RemoveContainer" containerID="45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e"
	Sep 15 06:44:20 addons-368929 kubelet[1209]: I0915 06:44:20.807269    1209 scope.go:117] "RemoveContainer" containerID="45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e"
	Sep 15 06:44:20 addons-368929 kubelet[1209]: E0915 06:44:20.807966    1209 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e\": container with ID starting with 45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e not found: ID does not exist" containerID="45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e"
	Sep 15 06:44:20 addons-368929 kubelet[1209]: I0915 06:44:20.807999    1209 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e"} err="failed to get container status \"45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e\": rpc error: code = NotFound desc = could not find container \"45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e\": container with ID starting with 45670c34a0e6710c7a612c9ed83b68dbe0411ef0f267f64c17fb06f947b4c70e not found: ID does not exist"
	Sep 15 06:44:22 addons-368929 kubelet[1209]: I0915 06:44:22.531001    1209 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1086452a-e1cb-4387-bec2-242bcb5c68dc" path="/var/lib/kubelet/pods/1086452a-e1cb-4387-bec2-242bcb5c68dc/volumes"
	Sep 15 06:44:22 addons-368929 kubelet[1209]: I0915 06:44:22.531419    1209 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1fa65c-7021-4ddf-a816-9f840f28af7d" path="/var/lib/kubelet/pods/ba1fa65c-7021-4ddf-a816-9f840f28af7d/volumes"
	Sep 15 06:44:22 addons-368929 kubelet[1209]: I0915 06:44:22.531865    1209 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a46bbd-02be-4c1f-aebb-00b53cf4c067" path="/var/lib/kubelet/pods/e5a46bbd-02be-4c1f-aebb-00b53cf4c067/volumes"
	Sep 15 06:44:22 addons-368929 kubelet[1209]: E0915 06:44:22.963763    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382662963337076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:44:22 addons-368929 kubelet[1209]: E0915 06:44:22.963811    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382662963337076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.712474    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14d82e54-1bb1-43c4-8e4d-d47f81096940-webhook-cert\") pod \"14d82e54-1bb1-43c4-8e4d-d47f81096940\" (UID: \"14d82e54-1bb1-43c4-8e4d-d47f81096940\") "
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.712539    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktkv2\" (UniqueName: \"kubernetes.io/projected/14d82e54-1bb1-43c4-8e4d-d47f81096940-kube-api-access-ktkv2\") pod \"14d82e54-1bb1-43c4-8e4d-d47f81096940\" (UID: \"14d82e54-1bb1-43c4-8e4d-d47f81096940\") "
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.716203    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d82e54-1bb1-43c4-8e4d-d47f81096940-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "14d82e54-1bb1-43c4-8e4d-d47f81096940" (UID: "14d82e54-1bb1-43c4-8e4d-d47f81096940"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.716817    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d82e54-1bb1-43c4-8e4d-d47f81096940-kube-api-access-ktkv2" (OuterVolumeSpecName: "kube-api-access-ktkv2") pod "14d82e54-1bb1-43c4-8e4d-d47f81096940" (UID: "14d82e54-1bb1-43c4-8e4d-d47f81096940"). InnerVolumeSpecName "kube-api-access-ktkv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.809645    1209 scope.go:117] "RemoveContainer" containerID="8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528"
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.813851    1209 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14d82e54-1bb1-43c4-8e4d-d47f81096940-webhook-cert\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.813887    1209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ktkv2\" (UniqueName: \"kubernetes.io/projected/14d82e54-1bb1-43c4-8e4d-d47f81096940-kube-api-access-ktkv2\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.830647    1209 scope.go:117] "RemoveContainer" containerID="8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528"
	Sep 15 06:44:24 addons-368929 kubelet[1209]: E0915 06:44:24.831266    1209 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528\": container with ID starting with 8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528 not found: ID does not exist" containerID="8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528"
	Sep 15 06:44:24 addons-368929 kubelet[1209]: I0915 06:44:24.831292    1209 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528"} err="failed to get container status \"8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528\": rpc error: code = NotFound desc = could not find container \"8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528\": container with ID starting with 8b40c9a7366b024e571602d70abd817b4a21a408312f385d0e1574646576b528 not found: ID does not exist"
	Sep 15 06:44:26 addons-368929 kubelet[1209]: I0915 06:44:26.528489    1209 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14d82e54-1bb1-43c4-8e4d-d47f81096940" path="/var/lib/kubelet/pods/14d82e54-1bb1-43c4-8e4d-d47f81096940/volumes"
	Sep 15 06:44:28 addons-368929 kubelet[1209]: E0915 06:44:28.527368    1209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8076028-6672-48b6-8085-14b06a0a0268"
	
	
	==> storage-provisioner [522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2] <==
	I0915 06:31:26.557262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:31:26.648171       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:31:26.648246       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:31:26.724533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:31:26.725105       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99973a3d-83c9-43fb-b77d-d8ca8d8c9277", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962 became leader
	I0915 06:31:26.729461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962!
	I0915 06:31:26.839908       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962!
	

                                                
                                                
-- /stdout --
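The kube-scheduler "forbidden" errors above (nodes, storageclasses, poddisruptionbudgets, namespaces, replicasets, csistoragecapacities, configmaps) are all timestamped 06:31:11 and stop once the "Caches are synced" line appears two seconds later, so they look like a transient startup race rather than a cause of this failure. If the scheduler's permissions need to be ruled out explicitly, a quick check (assuming the addons-368929 kubeconfig context is still available) is:

	kubectl --context addons-368929 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context addons-368929 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler

Both are expected to print "yes" once the control plane is healthy.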
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-368929 -n addons-368929
helpers_test.go:261: (dbg) Run:  kubectl --context addons-368929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-368929 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-368929 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-368929/192.168.39.212
	Start Time:       Sun, 15 Sep 2024 06:32:56 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rz99b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rz99b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-368929
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m48s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    93s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
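The busybox pod never starts: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc, a normally public image, fails with "unable to retrieve auth token: invalid username/password", and the pod carries the fake gcp-auth environment, which suggests the injected pull credentials rather than network reachability. A rough way to separate the two (assuming crictl is available inside the minikube VM, as it normally is with the crio runtime) is to pull directly on the node and re-read the pod's pull events:

	minikube -p addons-368929 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	kubectl --context addons-368929 get events -n default --field-selector involvedObject.name=busybox

If the direct pull succeeds, the problem is in the credentials handed to the kubelet, not in reaching gcr.io.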
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.20s)
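The probe that timed out here is the in-VM curl with the nginx.example.com host header (it appears in the audit table further down with no end time recorded). A sketch of replaying it by hand is below; the label selector is only an assumption about how the ingress addon labels its controller pods:

	minikube -p addons-368929 ssh -- curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/
	kubectl --context addons-368929 get pods -A -l app.kubernetes.io/name=ingress-nginx -o wide

A 200 from the first command is roughly what the test waits for; anything else, the second command shows whether the controller pod is running at all.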

                                                
                                    
TestAddons/parallel/MetricsServer (327.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.453635ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2pshh" [0443fc45-c95c-4fab-9dfe-a1b598ac6c8b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006975605s
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (109.983171ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 10m31.829929577s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (77.399194ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 10m36.005238785s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (62.342995ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 10m41.468951735s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (76.505986ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 10m47.930896406s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (69.273811ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 10m59.311145635s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (62.437079ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 11m10.684455463s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (63.301554ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 11m42.218813422s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (62.795749ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 12m22.796454314s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (64.883019ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 12m50.715132336s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (61.926582ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 13m49.791833947s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (61.308926ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 14m32.004677199s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-368929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-368929 top pods -n kube-system: exit status 1 (62.241094ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-d42kz, age: 15m50.983835412s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
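Fifteen minutes of "Metrics not available" for the same coredns pod suggests the metrics pipeline never produced pod metrics at all, not just a slow first scrape. Two follow-up checks, assuming the context still exists (v1beta1.metrics.k8s.io is the APIService the metrics-server addon normally registers):

	kubectl --context addons-368929 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-368929 -n kube-system logs deploy/metrics-server --tail=50

The APIService should report Available as True; if it does and the logs are clean, the remaining suspect is kubelet stats, which the eviction-manager "missing image stats" errors earlier in the kubelet log may also be hinting at.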
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-368929 -n addons-368929
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 logs -n 25: (1.352403348s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-832723                                                                     | download-only-832723 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-119130                                                                     | download-only-119130 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-702457 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | binary-mirror-702457                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37011                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-702457                                                                     | binary-mirror-702457 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-368929 --wait=true                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-368929 ssh cat                                                                       | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | /opt/local-path-provisioner/pvc-37b863f6-d527-401f-89ba-956f4262c0c9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-368929                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-368929                                                                            |                      |         |         |                     |                     |
	| addons  | addons-368929 addons                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-368929 ssh curl -s                                                                   | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-368929 ip                                                                            | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | addons-368929                                                                               |                      |         |         |                     |                     |
	| ip      | addons-368929 ip                                                                            | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-368929 addons disable                                                                | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-368929 addons                                                                        | addons-368929        | jenkins | v1.34.0 | 15 Sep 24 06:47 UTC | 15 Sep 24 06:47 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:30:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:30:34.502587   13942 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:30:34.502678   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:34.502685   13942 out.go:358] Setting ErrFile to fd 2...
	I0915 06:30:34.502689   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:34.502874   13942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 06:30:34.503472   13942 out.go:352] Setting JSON to false
	I0915 06:30:34.504273   13942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":780,"bootTime":1726381054,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:30:34.504369   13942 start.go:139] virtualization: kvm guest
	I0915 06:30:34.507106   13942 out.go:177] * [addons-368929] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:30:34.508386   13942 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:30:34.508405   13942 notify.go:220] Checking for updates...
	I0915 06:30:34.511198   13942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:30:34.512524   13942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:30:34.513658   13942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:34.514857   13942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:30:34.515998   13942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:30:34.517110   13942 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:30:34.547737   13942 out.go:177] * Using the kvm2 driver based on user configuration
	I0915 06:30:34.548792   13942 start.go:297] selected driver: kvm2
	I0915 06:30:34.548818   13942 start.go:901] validating driver "kvm2" against <nil>
	I0915 06:30:34.548833   13942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:30:34.549511   13942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:34.549598   13942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 06:30:34.563630   13942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 06:30:34.563667   13942 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:30:34.563907   13942 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:30:34.563939   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:30:34.563977   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:30:34.563985   13942 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:30:34.564028   13942 start.go:340] cluster config:
	{Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:34.564113   13942 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:34.565784   13942 out.go:177] * Starting "addons-368929" primary control-plane node in "addons-368929" cluster
	I0915 06:30:34.566926   13942 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:34.566954   13942 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:30:34.566963   13942 cache.go:56] Caching tarball of preloaded images
	I0915 06:30:34.567049   13942 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 06:30:34.567062   13942 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:30:34.567364   13942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json ...
	I0915 06:30:34.567385   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json: {Name:mk52f636c4ede8c4dfee1d713e4fd97fe830cfd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:34.567522   13942 start.go:360] acquireMachinesLock for addons-368929: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 06:30:34.567577   13942 start.go:364] duration metric: took 39.328µs to acquireMachinesLock for "addons-368929"
	I0915 06:30:34.567599   13942 start.go:93] Provisioning new machine with config: &{Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:30:34.567665   13942 start.go:125] createHost starting for "" (driver="kvm2")
	I0915 06:30:34.569232   13942 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0915 06:30:34.569343   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:30:34.569382   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:30:34.583188   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0915 06:30:34.583668   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:30:34.584246   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:30:34.584267   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:30:34.584599   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:30:34.584752   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:34.584884   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:34.585061   13942 start.go:159] libmachine.API.Create for "addons-368929" (driver="kvm2")
	I0915 06:30:34.585092   13942 client.go:168] LocalClient.Create starting
	I0915 06:30:34.585134   13942 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 06:30:34.864190   13942 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 06:30:35.049893   13942 main.go:141] libmachine: Running pre-create checks...
	I0915 06:30:35.049914   13942 main.go:141] libmachine: (addons-368929) Calling .PreCreateCheck
	I0915 06:30:35.050423   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:35.050849   13942 main.go:141] libmachine: Creating machine...
	I0915 06:30:35.050864   13942 main.go:141] libmachine: (addons-368929) Calling .Create
	I0915 06:30:35.051026   13942 main.go:141] libmachine: (addons-368929) Creating KVM machine...
	I0915 06:30:35.052240   13942 main.go:141] libmachine: (addons-368929) DBG | found existing default KVM network
	I0915 06:30:35.052972   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.052837   13964 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0915 06:30:35.053018   13942 main.go:141] libmachine: (addons-368929) DBG | created network xml: 
	I0915 06:30:35.053051   13942 main.go:141] libmachine: (addons-368929) DBG | <network>
	I0915 06:30:35.053059   13942 main.go:141] libmachine: (addons-368929) DBG |   <name>mk-addons-368929</name>
	I0915 06:30:35.053064   13942 main.go:141] libmachine: (addons-368929) DBG |   <dns enable='no'/>
	I0915 06:30:35.053070   13942 main.go:141] libmachine: (addons-368929) DBG |   
	I0915 06:30:35.053076   13942 main.go:141] libmachine: (addons-368929) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0915 06:30:35.053085   13942 main.go:141] libmachine: (addons-368929) DBG |     <dhcp>
	I0915 06:30:35.053090   13942 main.go:141] libmachine: (addons-368929) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0915 06:30:35.053095   13942 main.go:141] libmachine: (addons-368929) DBG |     </dhcp>
	I0915 06:30:35.053099   13942 main.go:141] libmachine: (addons-368929) DBG |   </ip>
	I0915 06:30:35.053104   13942 main.go:141] libmachine: (addons-368929) DBG |   
	I0915 06:30:35.053114   13942 main.go:141] libmachine: (addons-368929) DBG | </network>
	I0915 06:30:35.053144   13942 main.go:141] libmachine: (addons-368929) DBG | 
	I0915 06:30:35.058552   13942 main.go:141] libmachine: (addons-368929) DBG | trying to create private KVM network mk-addons-368929 192.168.39.0/24...
	I0915 06:30:35.121581   13942 main.go:141] libmachine: (addons-368929) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 ...
	I0915 06:30:35.121603   13942 main.go:141] libmachine: (addons-368929) DBG | private KVM network mk-addons-368929 192.168.39.0/24 created
	I0915 06:30:35.121625   13942 main.go:141] libmachine: (addons-368929) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 06:30:35.121656   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.121548   13964 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:35.121742   13942 main.go:141] libmachine: (addons-368929) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 06:30:35.379116   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.378937   13964 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa...
	I0915 06:30:35.512593   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.512453   13964 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/addons-368929.rawdisk...
	I0915 06:30:35.512623   13942 main.go:141] libmachine: (addons-368929) DBG | Writing magic tar header
	I0915 06:30:35.512637   13942 main.go:141] libmachine: (addons-368929) DBG | Writing SSH key tar header
	I0915 06:30:35.512649   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:35.512598   13964 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 ...
	I0915 06:30:35.512682   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929
	I0915 06:30:35.512720   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 06:30:35.512748   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:35.512761   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929 (perms=drwx------)
	I0915 06:30:35.512770   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 06:30:35.512782   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 06:30:35.512789   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home/jenkins
	I0915 06:30:35.512796   13942 main.go:141] libmachine: (addons-368929) DBG | Checking permissions on dir: /home
	I0915 06:30:35.512802   13942 main.go:141] libmachine: (addons-368929) DBG | Skipping /home - not owner
	I0915 06:30:35.512811   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 06:30:35.512824   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 06:30:35.512862   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 06:30:35.512879   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 06:30:35.512887   13942 main.go:141] libmachine: (addons-368929) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 06:30:35.512892   13942 main.go:141] libmachine: (addons-368929) Creating domain...
	I0915 06:30:35.513950   13942 main.go:141] libmachine: (addons-368929) define libvirt domain using xml: 
	I0915 06:30:35.513976   13942 main.go:141] libmachine: (addons-368929) <domain type='kvm'>
	I0915 06:30:35.513987   13942 main.go:141] libmachine: (addons-368929)   <name>addons-368929</name>
	I0915 06:30:35.513996   13942 main.go:141] libmachine: (addons-368929)   <memory unit='MiB'>4000</memory>
	I0915 06:30:35.514006   13942 main.go:141] libmachine: (addons-368929)   <vcpu>2</vcpu>
	I0915 06:30:35.514012   13942 main.go:141] libmachine: (addons-368929)   <features>
	I0915 06:30:35.514017   13942 main.go:141] libmachine: (addons-368929)     <acpi/>
	I0915 06:30:35.514020   13942 main.go:141] libmachine: (addons-368929)     <apic/>
	I0915 06:30:35.514025   13942 main.go:141] libmachine: (addons-368929)     <pae/>
	I0915 06:30:35.514029   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514034   13942 main.go:141] libmachine: (addons-368929)   </features>
	I0915 06:30:35.514040   13942 main.go:141] libmachine: (addons-368929)   <cpu mode='host-passthrough'>
	I0915 06:30:35.514045   13942 main.go:141] libmachine: (addons-368929)   
	I0915 06:30:35.514052   13942 main.go:141] libmachine: (addons-368929)   </cpu>
	I0915 06:30:35.514057   13942 main.go:141] libmachine: (addons-368929)   <os>
	I0915 06:30:35.514063   13942 main.go:141] libmachine: (addons-368929)     <type>hvm</type>
	I0915 06:30:35.514068   13942 main.go:141] libmachine: (addons-368929)     <boot dev='cdrom'/>
	I0915 06:30:35.514074   13942 main.go:141] libmachine: (addons-368929)     <boot dev='hd'/>
	I0915 06:30:35.514079   13942 main.go:141] libmachine: (addons-368929)     <bootmenu enable='no'/>
	I0915 06:30:35.514087   13942 main.go:141] libmachine: (addons-368929)   </os>
	I0915 06:30:35.514123   13942 main.go:141] libmachine: (addons-368929)   <devices>
	I0915 06:30:35.514143   13942 main.go:141] libmachine: (addons-368929)     <disk type='file' device='cdrom'>
	I0915 06:30:35.514158   13942 main.go:141] libmachine: (addons-368929)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/boot2docker.iso'/>
	I0915 06:30:35.514178   13942 main.go:141] libmachine: (addons-368929)       <target dev='hdc' bus='scsi'/>
	I0915 06:30:35.514196   13942 main.go:141] libmachine: (addons-368929)       <readonly/>
	I0915 06:30:35.514210   13942 main.go:141] libmachine: (addons-368929)     </disk>
	I0915 06:30:35.514224   13942 main.go:141] libmachine: (addons-368929)     <disk type='file' device='disk'>
	I0915 06:30:35.514233   13942 main.go:141] libmachine: (addons-368929)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 06:30:35.514247   13942 main.go:141] libmachine: (addons-368929)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/addons-368929.rawdisk'/>
	I0915 06:30:35.514254   13942 main.go:141] libmachine: (addons-368929)       <target dev='hda' bus='virtio'/>
	I0915 06:30:35.514259   13942 main.go:141] libmachine: (addons-368929)     </disk>
	I0915 06:30:35.514272   13942 main.go:141] libmachine: (addons-368929)     <interface type='network'>
	I0915 06:30:35.514279   13942 main.go:141] libmachine: (addons-368929)       <source network='mk-addons-368929'/>
	I0915 06:30:35.514284   13942 main.go:141] libmachine: (addons-368929)       <model type='virtio'/>
	I0915 06:30:35.514291   13942 main.go:141] libmachine: (addons-368929)     </interface>
	I0915 06:30:35.514298   13942 main.go:141] libmachine: (addons-368929)     <interface type='network'>
	I0915 06:30:35.514327   13942 main.go:141] libmachine: (addons-368929)       <source network='default'/>
	I0915 06:30:35.514346   13942 main.go:141] libmachine: (addons-368929)       <model type='virtio'/>
	I0915 06:30:35.514353   13942 main.go:141] libmachine: (addons-368929)     </interface>
	I0915 06:30:35.514363   13942 main.go:141] libmachine: (addons-368929)     <serial type='pty'>
	I0915 06:30:35.514370   13942 main.go:141] libmachine: (addons-368929)       <target port='0'/>
	I0915 06:30:35.514375   13942 main.go:141] libmachine: (addons-368929)     </serial>
	I0915 06:30:35.514382   13942 main.go:141] libmachine: (addons-368929)     <console type='pty'>
	I0915 06:30:35.514401   13942 main.go:141] libmachine: (addons-368929)       <target type='serial' port='0'/>
	I0915 06:30:35.514411   13942 main.go:141] libmachine: (addons-368929)     </console>
	I0915 06:30:35.514423   13942 main.go:141] libmachine: (addons-368929)     <rng model='virtio'>
	I0915 06:30:35.514431   13942 main.go:141] libmachine: (addons-368929)       <backend model='random'>/dev/random</backend>
	I0915 06:30:35.514440   13942 main.go:141] libmachine: (addons-368929)     </rng>
	I0915 06:30:35.514452   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514462   13942 main.go:141] libmachine: (addons-368929)     
	I0915 06:30:35.514471   13942 main.go:141] libmachine: (addons-368929)   </devices>
	I0915 06:30:35.514478   13942 main.go:141] libmachine: (addons-368929) </domain>
	I0915 06:30:35.514493   13942 main.go:141] libmachine: (addons-368929) 
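
The block above is the libvirt domain XML the KVM driver defines for the node, echoed line by line through the logger. As a point of reference only, here is a minimal Go sketch of rendering such a definition from a template; the template text, struct fields and paths are illustrative placeholders, not minikube's actual driver code.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// domainTmpl is an abbreviated, illustrative version of the libvirt
	// domain XML echoed in the log above; it is not minikube's real template.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`
	
	type domainConfig struct {
		Name, ISOPath, DiskPath, Network string
		MemoryMiB, CPUs                  int
	}
	
	func main() {
		cfg := domainConfig{
			Name:      "addons-368929",
			MemoryMiB: 4000,
			CPUs:      2,
			ISOPath:   "/path/to/boot2docker.iso",       // placeholder path
			DiskPath:  "/path/to/addons-368929.rawdisk", // placeholder path
			Network:   "mk-addons-368929",
		}
		// Render the XML to stdout; a real driver would hand it to the
		// libvirt API (or virsh define) instead of printing it.
		tmpl := template.Must(template.New("domain").Parse(domainTmpl))
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}
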
	I0915 06:30:35.519732   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:97:d7:7e in network default
	I0915 06:30:35.520190   13942 main.go:141] libmachine: (addons-368929) Ensuring networks are active...
	I0915 06:30:35.520223   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:35.520835   13942 main.go:141] libmachine: (addons-368929) Ensuring network default is active
	I0915 06:30:35.521094   13942 main.go:141] libmachine: (addons-368929) Ensuring network mk-addons-368929 is active
	I0915 06:30:35.521540   13942 main.go:141] libmachine: (addons-368929) Getting domain xml...
	I0915 06:30:35.522139   13942 main.go:141] libmachine: (addons-368929) Creating domain...
	I0915 06:30:36.911230   13942 main.go:141] libmachine: (addons-368929) Waiting to get IP...
	I0915 06:30:36.912033   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:36.912348   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:36.912367   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:36.912342   13964 retry.go:31] will retry after 305.621927ms: waiting for machine to come up
	I0915 06:30:37.219791   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.220118   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.220142   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.220077   13964 retry.go:31] will retry after 369.163907ms: waiting for machine to come up
	I0915 06:30:37.590495   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.590957   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.590982   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.590911   13964 retry.go:31] will retry after 359.18262ms: waiting for machine to come up
	I0915 06:30:37.951271   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:37.951735   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:37.951766   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:37.951687   13964 retry.go:31] will retry after 431.887952ms: waiting for machine to come up
	I0915 06:30:38.385216   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:38.385618   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:38.385654   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:38.385573   13964 retry.go:31] will retry after 586.296252ms: waiting for machine to come up
	I0915 06:30:38.973375   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:38.973835   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:38.973871   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:38.973742   13964 retry.go:31] will retry after 586.258738ms: waiting for machine to come up
	I0915 06:30:39.561452   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:39.561928   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:39.561949   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:39.561894   13964 retry.go:31] will retry after 904.897765ms: waiting for machine to come up
	I0915 06:30:40.468462   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:40.468857   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:40.468885   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:40.468834   13964 retry.go:31] will retry after 1.465267821s: waiting for machine to come up
	I0915 06:30:41.936456   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:41.936817   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:41.936840   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:41.936771   13964 retry.go:31] will retry after 1.712738986s: waiting for machine to come up
	I0915 06:30:43.651694   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:43.652084   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:43.652108   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:43.652035   13964 retry.go:31] will retry after 2.008845539s: waiting for machine to come up
	I0915 06:30:45.663024   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:45.663547   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:45.663573   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:45.663481   13964 retry.go:31] will retry after 2.586699686s: waiting for machine to come up
	I0915 06:30:48.251434   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:48.251775   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:48.251796   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:48.251742   13964 retry.go:31] will retry after 2.759887359s: waiting for machine to come up
	I0915 06:30:51.013703   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:51.014097   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find current IP address of domain addons-368929 in network mk-addons-368929
	I0915 06:30:51.014135   13942 main.go:141] libmachine: (addons-368929) DBG | I0915 06:30:51.014061   13964 retry.go:31] will retry after 4.488920728s: waiting for machine to come up
	I0915 06:30:55.504672   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.505169   13942 main.go:141] libmachine: (addons-368929) Found IP for machine: 192.168.39.212
	I0915 06:30:55.505195   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has current primary IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
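
The "will retry after ..." lines above come from polling the network's DHCP leases until the new domain obtains an address. Below is a small, self-contained Go sketch of that retry-with-growing-jittered-backoff pattern; lookupIP and the interval constants are assumptions for illustration, not the driver's real implementation.

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupIP stands in for querying the libvirt network's DHCP leases for
	// the domain's MAC address; it is a placeholder that always misses.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease for " + mac)
	}
	
	// waitForIP polls until a lease shows up, sleeping for a randomized,
	// growing interval between attempts, similar to the retry.go lines above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// Jitter keeps parallel machine creations from polling in lockstep.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 5*time.Second {
				backoff += backoff / 2
			}
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}
	
	func main() {
		if _, err := waitForIP("52:54:00:b0:ac:60", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}
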
	I0915 06:30:55.505204   13942 main.go:141] libmachine: (addons-368929) Reserving static IP address...
	I0915 06:30:55.505525   13942 main.go:141] libmachine: (addons-368929) DBG | unable to find host DHCP lease matching {name: "addons-368929", mac: "52:54:00:b0:ac:60", ip: "192.168.39.212"} in network mk-addons-368929
	I0915 06:30:55.572968   13942 main.go:141] libmachine: (addons-368929) DBG | Getting to WaitForSSH function...
	I0915 06:30:55.573003   13942 main.go:141] libmachine: (addons-368929) Reserved static IP address: 192.168.39.212
	I0915 06:30:55.573015   13942 main.go:141] libmachine: (addons-368929) Waiting for SSH to be available...
	I0915 06:30:55.575550   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.575899   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.575919   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.576162   13942 main.go:141] libmachine: (addons-368929) DBG | Using SSH client type: external
	I0915 06:30:55.576193   13942 main.go:141] libmachine: (addons-368929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa (-rw-------)
	I0915 06:30:55.576224   13942 main.go:141] libmachine: (addons-368929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 06:30:55.576241   13942 main.go:141] libmachine: (addons-368929) DBG | About to run SSH command:
	I0915 06:30:55.576256   13942 main.go:141] libmachine: (addons-368929) DBG | exit 0
	I0915 06:30:55.705901   13942 main.go:141] libmachine: (addons-368929) DBG | SSH cmd err, output: <nil>: 
	I0915 06:30:55.706188   13942 main.go:141] libmachine: (addons-368929) KVM machine creation complete!
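
The reachability probe just above shells out to the external ssh binary with the flag set printed in the DBG line. A hedged Go sketch of the same kind of invocation follows; the user, host and key path are placeholders, and "exit 0" is the same no-op the log uses to test that SSH answers.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runExternalSSH mirrors the external ssh command line shown in the log
	// (flags copied from the logged invocation); it is not minikube's code.
	func runExternalSSH(user, host, keyPath, command string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			command,
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
		return err
	}
	
	func main() {
		_ = runExternalSSH("docker", "192.168.39.212", "/path/to/id_rsa", "exit 0")
	}
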
	I0915 06:30:55.706473   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:55.707031   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:55.707200   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:55.707361   13942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 06:30:55.707372   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:30:55.708643   13942 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 06:30:55.708660   13942 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 06:30:55.708667   13942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 06:30:55.708675   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.710847   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.711159   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.711187   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.711316   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.711564   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.711697   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.711844   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.712017   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.712184   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.712193   13942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 06:30:55.812983   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:30:55.813004   13942 main.go:141] libmachine: Detecting the provisioner...
	I0915 06:30:55.813010   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.815500   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.815897   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.815925   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.816042   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.816221   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.816381   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.816518   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.816670   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.816829   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.816839   13942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 06:30:55.918360   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 06:30:55.918439   13942 main.go:141] libmachine: found compatible host: buildroot
	I0915 06:30:55.918448   13942 main.go:141] libmachine: Provisioning with buildroot...
	I0915 06:30:55.918454   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:55.918690   13942 buildroot.go:166] provisioning hostname "addons-368929"
	I0915 06:30:55.918711   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:55.918840   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:55.920966   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.921446   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:55.921474   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:55.921659   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:55.921826   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.921967   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:55.922063   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:55.922230   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:55.922377   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:55.922388   13942 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-368929 && echo "addons-368929" | sudo tee /etc/hostname
	I0915 06:30:56.039825   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-368929
	
	I0915 06:30:56.039850   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.042251   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.042524   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.042543   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.042750   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.042921   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.043023   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.043132   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.043236   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.043381   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.043395   13942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-368929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-368929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-368929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:30:56.154978   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
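
The SSH command above makes the 127.0.1.1 entry in /etc/hosts idempotent: any stale line is dropped and a fresh one is written for the node's hostname. A short Go sketch of the same edit, operating on an in-memory copy of the file for illustration (not the provisioner's actual code):

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// ensureHostsEntry reproduces the shell snippet above in Go: remove any
	// existing 127.0.1.1 line and append one for the node's hostname.
	func ensureHostsEntry(hosts, hostname string) string {
		var out []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasPrefix(line, "127.0.1.1") {
				continue // replaced below
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
	}
	
	func main() {
		before := "127.0.0.1 localhost\n127.0.1.1 minikube"
		fmt.Print(ensureHostsEntry(before, "addons-368929"))
	}
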
	I0915 06:30:56.155020   13942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 06:30:56.155050   13942 buildroot.go:174] setting up certificates
	I0915 06:30:56.155069   13942 provision.go:84] configureAuth start
	I0915 06:30:56.155094   13942 main.go:141] libmachine: (addons-368929) Calling .GetMachineName
	I0915 06:30:56.155378   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.157861   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.158130   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.158164   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.158372   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.160429   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.160700   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.160725   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.160840   13942 provision.go:143] copyHostCerts
	I0915 06:30:56.160923   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 06:30:56.161059   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 06:30:56.161236   13942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 06:30:56.161313   13942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.addons-368929 san=[127.0.0.1 192.168.39.212 addons-368929 localhost minikube]
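
The server certificate above is generated with SANs covering the loopback address, the node IP, the node name, localhost and minikube. A minimal Go sketch of producing a certificate with that kind of SAN list follows; it self-signs for brevity instead of signing with the profile's ca.pem/ca-key.pem, so it is illustrative only.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	// newServerCert sketches generating a server certificate with the SAN
	// entries reported in the log; validity and key usage are assumptions.
	func newServerCert(sanIPs []net.IP, sanDNS []string) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-368929"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  sanIPs,
			DNSNames:     sanDNS,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}
	
	func main() {
		pemBytes, err := newServerCert(
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.212")},
			[]string{"addons-368929", "localhost", "minikube"},
		)
		if err != nil {
			panic(err)
		}
		fmt.Printf("generated %d bytes of PEM\n", len(pemBytes))
	}
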
	I0915 06:30:56.248249   13942 provision.go:177] copyRemoteCerts
	I0915 06:30:56.248322   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:30:56.248351   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.251283   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.251603   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.251636   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.251851   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.252026   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.252134   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.252249   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.336360   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:30:56.360914   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:30:56.385134   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 06:30:56.408123   13942 provision.go:87] duration metric: took 253.040376ms to configureAuth
	I0915 06:30:56.408147   13942 buildroot.go:189] setting minikube options for container-runtime
	I0915 06:30:56.408302   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:56.408370   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.410873   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.411209   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.411236   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.411382   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.411556   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.411726   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.411866   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.412039   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.412202   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.412215   13942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:30:56.625572   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:30:56.625596   13942 main.go:141] libmachine: Checking connection to Docker...
	I0915 06:30:56.625603   13942 main.go:141] libmachine: (addons-368929) Calling .GetURL
	I0915 06:30:56.626810   13942 main.go:141] libmachine: (addons-368929) DBG | Using libvirt version 6000000
	I0915 06:30:56.628657   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.628951   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.628973   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.629143   13942 main.go:141] libmachine: Docker is up and running!
	I0915 06:30:56.629155   13942 main.go:141] libmachine: Reticulating splines...
	I0915 06:30:56.629162   13942 client.go:171] duration metric: took 22.044062992s to LocalClient.Create
	I0915 06:30:56.629182   13942 start.go:167] duration metric: took 22.044122374s to libmachine.API.Create "addons-368929"
	I0915 06:30:56.629204   13942 start.go:293] postStartSetup for "addons-368929" (driver="kvm2")
	I0915 06:30:56.629219   13942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:30:56.629241   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.629436   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:30:56.629459   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.631144   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.631446   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.631469   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.631552   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.631671   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.631765   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.631918   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.712275   13942 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:30:56.716708   13942 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 06:30:56.716735   13942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 06:30:56.716821   13942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 06:30:56.716859   13942 start.go:296] duration metric: took 87.643981ms for postStartSetup
	I0915 06:30:56.716897   13942 main.go:141] libmachine: (addons-368929) Calling .GetConfigRaw
	I0915 06:30:56.717419   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.719736   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.720131   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.720166   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.720394   13942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/config.json ...
	I0915 06:30:56.720616   13942 start.go:128] duration metric: took 22.152940074s to createHost
	I0915 06:30:56.720641   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.722803   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.723117   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.723157   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.723308   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.723466   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.723612   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.723752   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.723900   13942 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:56.724053   13942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0915 06:30:56.724062   13942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 06:30:56.826287   13942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726381856.792100710
	
	I0915 06:30:56.826308   13942 fix.go:216] guest clock: 1726381856.792100710
	I0915 06:30:56.826317   13942 fix.go:229] Guest: 2024-09-15 06:30:56.79210071 +0000 UTC Remote: 2024-09-15 06:30:56.720628741 +0000 UTC m=+22.251007338 (delta=71.471969ms)
	I0915 06:30:56.826365   13942 fix.go:200] guest clock delta is within tolerance: 71.471969ms
	I0915 06:30:56.826373   13942 start.go:83] releasing machines lock for "addons-368929", held for 22.25878368s
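
The fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host-side timestamp and accept the machine when the delta is within tolerance. A small Go sketch of that comparison, using the two timestamps from this log and an assumed 1s tolerance:

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock parses the "seconds.nanoseconds" output of `date +%s.%N`
	// as seen in the log above.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		guest, err := parseGuestClock("1726381856.792100710")
		if err != nil {
			panic(err)
		}
		// Host-side timestamp taken from the same log entry.
		host := time.Unix(1726381856, 720628741)
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta %v, within 1s tolerance: %v\n", delta, delta < time.Second)
	}
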
	I0915 06:30:56.826395   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.826655   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:56.828977   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.829310   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.829334   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.829599   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830090   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830276   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:30:56.830359   13942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:30:56.830415   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.830460   13942 ssh_runner.go:195] Run: cat /version.json
	I0915 06:30:56.830484   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:30:56.833094   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833320   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833452   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.833493   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833613   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.833768   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:56.833779   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.833801   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:56.833988   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:30:56.833998   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.834119   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:30:56.834185   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.834246   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:30:56.834495   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:30:56.938490   13942 ssh_runner.go:195] Run: systemctl --version
	I0915 06:30:56.944445   13942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:30:57.102745   13942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 06:30:57.108913   13942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 06:30:57.108984   13942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:57.124469   13942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 06:30:57.124494   13942 start.go:495] detecting cgroup driver to use...
	I0915 06:30:57.124559   13942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:30:57.141386   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:30:57.155119   13942 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:30:57.155185   13942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:30:57.168695   13942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:30:57.182111   13942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:30:57.306290   13942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:30:57.442868   13942 docker.go:233] disabling docker service ...
	I0915 06:30:57.442931   13942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:30:57.456992   13942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:30:57.470375   13942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:30:57.613118   13942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:30:57.736610   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:30:57.750704   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:57.769455   13942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:30:57.769509   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.779795   13942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:30:57.779873   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.790360   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.800573   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.811474   13942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:30:57.822289   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.832671   13942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.849736   13942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:57.860236   13942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:30:57.869843   13942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 06:30:57.869913   13942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 06:30:57.883852   13942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:30:57.893890   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:58.013644   13942 ssh_runner.go:195] Run: sudo systemctl restart crio
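
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarts CRI-O. The same keyed line replacement can be sketched in Go with a regexp; this illustrates the pattern only, not the code minikube runs over SSH.

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// setCrioOption replaces an existing `key = ...` line in a crio.conf
	// fragment with the desired value, mirroring the sed rewrites above.
	func setCrioOption(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
	}
	
	func main() {
		// Example input values are assumptions; only the keys come from the log.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
		conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}
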
	I0915 06:30:58.112843   13942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:30:58.112948   13942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:30:58.119889   13942 start.go:563] Will wait 60s for crictl version
	I0915 06:30:58.119973   13942 ssh_runner.go:195] Run: which crictl
	I0915 06:30:58.123756   13942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:30:58.159622   13942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 06:30:58.159742   13942 ssh_runner.go:195] Run: crio --version
	I0915 06:30:58.186651   13942 ssh_runner.go:195] Run: crio --version
	I0915 06:30:58.215616   13942 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 06:30:58.216928   13942 main.go:141] libmachine: (addons-368929) Calling .GetIP
	I0915 06:30:58.219246   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:58.219519   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:30:58.219540   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:30:58.219725   13942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 06:30:58.223999   13942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:58.236938   13942 kubeadm.go:883] updating cluster {Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:30:58.237037   13942 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:58.237078   13942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:58.273590   13942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0915 06:30:58.273648   13942 ssh_runner.go:195] Run: which lz4
	I0915 06:30:58.277802   13942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 06:30:58.282345   13942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 06:30:58.282370   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0915 06:30:59.603321   13942 crio.go:462] duration metric: took 1.325549194s to copy over tarball
	I0915 06:30:59.603391   13942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 06:31:01.698248   13942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094830019s)
	I0915 06:31:01.698276   13942 crio.go:469] duration metric: took 2.094925403s to extract the tarball
	I0915 06:31:01.698286   13942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 06:31:01.735576   13942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:31:01.777236   13942 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:31:01.777262   13942 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:31:01.777272   13942 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I0915 06:31:01.777361   13942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-368929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:31:01.777425   13942 ssh_runner.go:195] Run: crio config
	I0915 06:31:01.819719   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:31:01.819741   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:31:01.819753   13942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:31:01.819775   13942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-368929 NodeName:addons-368929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:31:01.819928   13942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-368929"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:31:01.820001   13942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:31:01.830202   13942 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:31:01.830264   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:31:01.840653   13942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0915 06:31:01.859116   13942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:31:01.876520   13942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
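	(Note: the kubeadm config written above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm itself flags as deprecated further down in this log and suggests migrating with "kubeadm config migrate". As a hedged aside, not part of minikube's own flow, that migration would look roughly like the sketch below, assuming kubeadm v1.31.1 is on the node's PATH and the config has been copied to /var/tmp/minikube/kubeadm.yaml as later log lines show.)

	    # Sketch only: rewrite the deprecated v1beta3 config in the current kubeadm API version,
	    # following the suggestion printed in kubeadm's own warning later in this log.
	    sudo cp /var/tmp/minikube/kubeadm.yaml old.yaml
	    kubeadm config migrate --old-config old.yaml --new-config new.yaml
	    # new.yaml now holds the equivalent InitConfiguration/ClusterConfiguration.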
	I0915 06:31:01.893776   13942 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0915 06:31:01.897643   13942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
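	(Note: read on its own, the /etc/hosts one-liner just above is minikube's idempotent host-entry update: drop any stale control-plane.minikube.internal line, append the current mapping, and copy the rebuilt file back over /etc/hosts. The same commands, expanded with comments for readability and adding nothing beyond what the log already runs:)

	    {
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # keep every other entry
	      echo "192.168.39.212	control-plane.minikube.internal"     # append the current mapping
	    } > /tmp/h.$$                                                # build the new file under a unique name
	    sudo cp /tmp/h.$$ /etc/hosts                                 # install it in place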
	I0915 06:31:01.910584   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:31:02.038664   13942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:31:02.055783   13942 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929 for IP: 192.168.39.212
	I0915 06:31:02.055810   13942 certs.go:194] generating shared ca certs ...
	I0915 06:31:02.055829   13942 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.055990   13942 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 06:31:02.153706   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt ...
	I0915 06:31:02.153733   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt: {Name:mk72efeae7a5e079e02dddca5ae1326e66b50791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.153893   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key ...
	I0915 06:31:02.153904   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key: {Name:mk60adb75b67a4ecb03ce39bc98fc22d93504324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.153974   13942 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 06:31:02.294105   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt ...
	I0915 06:31:02.294129   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt: {Name:mk6ad9572391112128f71a73d401b2f36e5187ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.294270   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key ...
	I0915 06:31:02.294280   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key: {Name:mk997129f7d8042b546775ee409cc0c02ea66874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.294341   13942 certs.go:256] generating profile certs ...
	I0915 06:31:02.294402   13942 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key
	I0915 06:31:02.294422   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt with IP's: []
	I0915 06:31:02.474521   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt ...
	I0915 06:31:02.474552   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: {Name:mk5230116ec10f82362ea4d2c021febd7553501e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.474711   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key ...
	I0915 06:31:02.474722   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.key: {Name:mk4c7cfc18d39b7a5234396e9e59579ecd48ad76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.474787   13942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f
	I0915 06:31:02.474804   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212]
	I0915 06:31:02.564099   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f ...
	I0915 06:31:02.564130   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f: {Name:mkc23c9f9e76c0a988b86d564062dd840e1d35eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.564279   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f ...
	I0915 06:31:02.564291   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f: {Name:mk4e887c90c5c7adca7e638dabe3b3c3ddd2bf81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.564361   13942 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt.2bf3d68f -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt
	I0915 06:31:02.564435   13942 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key.2bf3d68f -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key
	I0915 06:31:02.564480   13942 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key
	I0915 06:31:02.564496   13942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt with IP's: []
	I0915 06:31:02.689851   13942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt ...
	I0915 06:31:02.689879   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt: {Name:mk64a1aa0a2a68e9a444363c01c5932bf3e0851a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.690029   13942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key ...
	I0915 06:31:02.690039   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key: {Name:mk7c8d3875c49566ea32a3445025bddf158772fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:02.690216   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:31:02.690247   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:31:02.690274   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:31:02.690296   13942 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 06:31:02.690807   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:31:02.716623   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:31:02.745150   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:31:02.773869   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:31:02.798062   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:31:02.820956   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:31:02.844972   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:31:02.869179   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 06:31:02.893630   13942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:31:02.917474   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:31:02.934168   13942 ssh_runner.go:195] Run: openssl version
	I0915 06:31:02.940062   13942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:31:02.951007   13942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.955419   13942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.955475   13942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:31:02.961175   13942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
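	(Note: the two openssl-related steps above follow the standard OpenSSL CA-store layout: the CA certificate's subject hash, b5213941 for minikubeCA.pem judging by the symlink name, is used as the file name of a ".0" entry under /etc/ssl/certs so TLS clients that trust the system store can find the CA by hash. A minimal sketch of the same technique, assuming the paths from this log:)

	    # Print the subject hash OpenSSL uses to locate a CA certificate
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # Create the hash-named entry (b5213941.0 in this run) pointing at the CA
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"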
	I0915 06:31:02.972122   13942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:31:02.976566   13942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:31:02.976612   13942 kubeadm.go:392] StartCluster: {Name:addons-368929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-368929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:31:02.976677   13942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:31:02.976718   13942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:31:03.012559   13942 cri.go:89] found id: ""
	I0915 06:31:03.012619   13942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:31:03.022968   13942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:31:03.032884   13942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:31:03.042781   13942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:31:03.042798   13942 kubeadm.go:157] found existing configuration files:
	
	I0915 06:31:03.042840   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:31:03.052268   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:31:03.052318   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:31:03.062232   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:31:03.071324   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:31:03.071379   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:31:03.080551   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:31:03.089375   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:31:03.089424   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:31:03.099002   13942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:31:03.108163   13942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:31:03.108213   13942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:31:03.117874   13942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 06:31:03.179081   13942 kubeadm.go:310] W0915 06:31:03.150215     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:31:03.179952   13942 kubeadm.go:310] W0915 06:31:03.151258     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:31:03.288765   13942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:31:13.244212   13942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:31:13.244285   13942 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:31:13.244371   13942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:31:13.244504   13942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:31:13.244637   13942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:31:13.244724   13942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:31:13.246462   13942 out.go:235]   - Generating certificates and keys ...
	I0915 06:31:13.246540   13942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:31:13.246602   13942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:31:13.246676   13942 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:31:13.246741   13942 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:31:13.246798   13942 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:31:13.246841   13942 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:31:13.246910   13942 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:31:13.247029   13942 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-368929 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0915 06:31:13.247105   13942 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:31:13.247259   13942 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-368929 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0915 06:31:13.247354   13942 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:31:13.247454   13942 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:31:13.247496   13942 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:31:13.247569   13942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:31:13.247649   13942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:31:13.247737   13942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:31:13.247812   13942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:31:13.247905   13942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:31:13.247987   13942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:31:13.248103   13942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:31:13.248230   13942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:31:13.249711   13942 out.go:235]   - Booting up control plane ...
	I0915 06:31:13.249799   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:31:13.249895   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:31:13.249949   13942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:31:13.250075   13942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:31:13.250170   13942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:31:13.250212   13942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:31:13.250324   13942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:31:13.250471   13942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:31:13.250554   13942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000955995s
	I0915 06:31:13.250648   13942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:31:13.250740   13942 kubeadm.go:310] [api-check] The API server is healthy after 5.001828524s
	I0915 06:31:13.250879   13942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:31:13.250988   13942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:31:13.251068   13942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:31:13.251284   13942 kubeadm.go:310] [mark-control-plane] Marking the node addons-368929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:31:13.251342   13942 kubeadm.go:310] [bootstrap-token] Using token: 0sj1hx.q1rkmq819x572pmn
	I0915 06:31:13.252875   13942 out.go:235]   - Configuring RBAC rules ...
	I0915 06:31:13.253007   13942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:31:13.253098   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:31:13.253263   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:31:13.253367   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:31:13.253467   13942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:31:13.253534   13942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:31:13.253646   13942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:31:13.253696   13942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:31:13.253766   13942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:31:13.253779   13942 kubeadm.go:310] 
	I0915 06:31:13.253880   13942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:31:13.253892   13942 kubeadm.go:310] 
	I0915 06:31:13.253965   13942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:31:13.253973   13942 kubeadm.go:310] 
	I0915 06:31:13.253994   13942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:31:13.254066   13942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:31:13.254144   13942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:31:13.254155   13942 kubeadm.go:310] 
	I0915 06:31:13.254229   13942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:31:13.254238   13942 kubeadm.go:310] 
	I0915 06:31:13.254305   13942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:31:13.254315   13942 kubeadm.go:310] 
	I0915 06:31:13.254361   13942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:31:13.254433   13942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:31:13.254531   13942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:31:13.254543   13942 kubeadm.go:310] 
	I0915 06:31:13.254651   13942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:31:13.254721   13942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:31:13.254735   13942 kubeadm.go:310] 
	I0915 06:31:13.254843   13942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0sj1hx.q1rkmq819x572pmn \
	I0915 06:31:13.254928   13942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b \
	I0915 06:31:13.254946   13942 kubeadm.go:310] 	--control-plane 
	I0915 06:31:13.254952   13942 kubeadm.go:310] 
	I0915 06:31:13.255027   13942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:31:13.255036   13942 kubeadm.go:310] 
	I0915 06:31:13.255108   13942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0sj1hx.q1rkmq819x572pmn \
	I0915 06:31:13.255213   13942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b 
	I0915 06:31:13.255224   13942 cni.go:84] Creating CNI manager for ""
	I0915 06:31:13.255230   13942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:31:13.256846   13942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 06:31:13.258367   13942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 06:31:13.269533   13942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0915 06:31:13.286955   13942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:31:13.287033   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:13.287047   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-368929 minikube.k8s.io/updated_at=2024_09_15T06_31_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-368929 minikube.k8s.io/primary=true
	I0915 06:31:13.439577   13942 ops.go:34] apiserver oom_adj: -16
	I0915 06:31:13.439619   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:13.939804   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:14.440122   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:14.939768   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:15.440612   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:15.940408   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:16.439804   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:16.940340   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:17.440583   13942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:31:17.530382   13942 kubeadm.go:1113] duration metric: took 4.243409251s to wait for elevateKubeSystemPrivileges
	I0915 06:31:17.530429   13942 kubeadm.go:394] duration metric: took 14.553819023s to StartCluster
	I0915 06:31:17.530452   13942 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:17.530582   13942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:31:17.530898   13942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:31:17.531115   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:31:17.531117   13942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:31:17.531135   13942 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:31:17.531245   13942 addons.go:69] Setting yakd=true in profile "addons-368929"
	I0915 06:31:17.531264   13942 addons.go:234] Setting addon yakd=true in "addons-368929"
	I0915 06:31:17.531291   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531295   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:31:17.531303   13942 addons.go:69] Setting ingress-dns=true in profile "addons-368929"
	I0915 06:31:17.531317   13942 addons.go:69] Setting default-storageclass=true in profile "addons-368929"
	I0915 06:31:17.531326   13942 addons.go:234] Setting addon ingress-dns=true in "addons-368929"
	I0915 06:31:17.531335   13942 addons.go:69] Setting metrics-server=true in profile "addons-368929"
	I0915 06:31:17.531338   13942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-368929"
	I0915 06:31:17.531337   13942 addons.go:69] Setting registry=true in profile "addons-368929"
	I0915 06:31:17.531349   13942 addons.go:234] Setting addon metrics-server=true in "addons-368929"
	I0915 06:31:17.531345   13942 addons.go:69] Setting inspektor-gadget=true in profile "addons-368929"
	I0915 06:31:17.531359   13942 addons.go:234] Setting addon registry=true in "addons-368929"
	I0915 06:31:17.531366   13942 addons.go:234] Setting addon inspektor-gadget=true in "addons-368929"
	I0915 06:31:17.531374   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531366   13942 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-368929"
	I0915 06:31:17.531389   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531390   13942 addons.go:69] Setting ingress=true in profile "addons-368929"
	I0915 06:31:17.531398   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531406   13942 addons.go:234] Setting addon ingress=true in "addons-368929"
	I0915 06:31:17.531416   13942 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-368929"
	I0915 06:31:17.531429   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531441   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531763   13942 addons.go:69] Setting storage-provisioner=true in profile "addons-368929"
	I0915 06:31:17.531769   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531778   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531782   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531785   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531825   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531788   13942 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-368929"
	I0915 06:31:17.531921   13942 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-368929"
	I0915 06:31:17.531375   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531784   13942 addons.go:234] Setting addon storage-provisioner=true in "addons-368929"
	I0915 06:31:17.532163   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531796   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531381   13942 addons.go:69] Setting gcp-auth=true in profile "addons-368929"
	I0915 06:31:17.532282   13942 mustload.go:65] Loading cluster: addons-368929
	I0915 06:31:17.532299   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532333   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532362   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532377   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532462   13942 config.go:182] Loaded profile config "addons-368929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:31:17.532536   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532574   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531801   13942 addons.go:69] Setting volcano=true in profile "addons-368929"
	I0915 06:31:17.532649   13942 addons.go:234] Setting addon volcano=true in "addons-368929"
	I0915 06:31:17.532676   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531802   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.532807   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.532834   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.533044   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.531799   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533082   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.533100   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531802   13942 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-368929"
	I0915 06:31:17.533268   13942 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-368929"
	I0915 06:31:17.533292   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531799   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533422   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531386   13942 addons.go:69] Setting helm-tiller=true in profile "addons-368929"
	I0915 06:31:17.533579   13942 addons.go:234] Setting addon helm-tiller=true in "addons-368929"
	I0915 06:31:17.533603   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.533660   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.533677   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531808   13942 addons.go:69] Setting cloud-spanner=true in profile "addons-368929"
	I0915 06:31:17.533996   13942 addons.go:234] Setting addon cloud-spanner=true in "addons-368929"
	I0915 06:31:17.534023   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.531808   13942 addons.go:69] Setting volumesnapshots=true in profile "addons-368929"
	I0915 06:31:17.534072   13942 addons.go:234] Setting addon volumesnapshots=true in "addons-368929"
	I0915 06:31:17.534098   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.534391   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.534396   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.534404   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.534410   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.544717   13942 out.go:177] * Verifying Kubernetes components...
	I0915 06:31:17.531817   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.531900   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.546517   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.551069   13942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:31:17.552863   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0915 06:31:17.552873   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I0915 06:31:17.553975   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0915 06:31:17.554008   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0915 06:31:17.554479   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.554606   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.554630   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.554982   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.555001   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.555033   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0915 06:31:17.555190   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.555399   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.555473   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556128   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.556141   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.556194   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.556312   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.556324   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.556379   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556441   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.556504   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.556665   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.557213   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.557249   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.560223   13942 addons.go:234] Setting addon default-storageclass=true in "addons-368929"
	I0915 06:31:17.560260   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.560623   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.560654   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.562235   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.562259   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.562337   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.562459   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.562469   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.564071   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.564137   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.564190   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0915 06:31:17.564701   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.564732   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.565696   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.565803   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0915 06:31:17.566345   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.566413   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.566440   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.566451   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.566783   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.566811   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.568220   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.568238   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.568363   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.568373   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.568586   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.568722   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.575956   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.578834   13942 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-368929"
	I0915 06:31:17.578915   13942 host.go:66] Checking if "addons-368929" exists ...
	I0915 06:31:17.579206   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.579264   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.586757   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0915 06:31:17.587453   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.587903   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.587915   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.588249   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.588667   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.588681   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.589379   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0915 06:31:17.591499   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I0915 06:31:17.592121   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.592540   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
	I0915 06:31:17.592775   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.592797   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.593043   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.593129   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.593632   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.593670   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.594252   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.594269   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.594288   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.594321   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.594721   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.595309   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.595327   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.595709   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.596188   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.597875   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.598751   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.599189   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.599228   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.600165   13942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:31:17.601729   13942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:31:17.601752   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:31:17.601771   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.605356   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.605714   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.605733   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.606017   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.606225   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.606363   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.606488   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.608862   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0915 06:31:17.609332   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.609839   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.609855   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.610126   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0915 06:31:17.610221   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.610370   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.610667   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.611155   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.611171   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.611594   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.612184   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.612207   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.612239   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.613742   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0915 06:31:17.614273   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.614432   13942 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:31:17.614832   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.614856   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.615194   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.615706   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.615749   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.615938   13942 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:31:17.615956   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:31:17.615977   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.618736   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I0915 06:31:17.619406   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.619549   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.619887   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.619906   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.619934   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0915 06:31:17.619991   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.620005   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.620125   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.620284   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.620306   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.620389   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.620439   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.620546   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.620912   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.620929   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.621094   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.621127   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.621225   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.621390   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.623143   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.624009   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0915 06:31:17.624078   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0915 06:31:17.624757   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.625302   13942 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:31:17.625323   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.625341   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.625640   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.626189   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.626227   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.626492   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0915 06:31:17.626724   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.627122   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:31:17.627137   13942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:31:17.627150   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.627443   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.627842   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.627858   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.628226   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.628780   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.628824   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.629909   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0915 06:31:17.630269   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.630711   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.630727   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.630778   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.630930   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.630947   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.631293   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.631304   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.631320   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.631317   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.631497   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.631668   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.632017   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.632057   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.632337   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.632451   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.635887   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0915 06:31:17.636212   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.636655   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.636671   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.637236   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.637272   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.637490   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.637666   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.639294   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.641479   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:31:17.642960   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:31:17.642978   13942 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:31:17.643001   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.646117   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.646502   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.646522   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.646795   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.647022   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.647177   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.647337   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.650261   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0915 06:31:17.652110   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0915 06:31:17.652286   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0915 06:31:17.652480   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0915 06:31:17.652627   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.652721   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653099   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653125   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.653192   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653334   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653346   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.653410   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.653645   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.653768   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.653779   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.655709   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0915 06:31:17.655715   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.655739   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.655715   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.655788   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.655803   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.655938   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.656099   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.656181   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.656265   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.656670   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.656688   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.656739   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658305   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658369   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.658421   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.659062   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.659317   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.660430   13942 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:31:17.660468   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:31:17.660448   13942 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:31:17.660836   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.661129   13942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:31:17.661142   13942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:31:17.661158   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.661714   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0915 06:31:17.662496   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.662785   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:31:17.662803   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:31:17.662819   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.663231   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.663251   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.663851   13942 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:31:17.663851   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:31:17.665526   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:31:17.665540   13942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:31:17.665573   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:31:17.665590   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.666518   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.667159   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:17.667209   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:17.667518   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.667975   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:31:17.668218   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.668405   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.668924   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.668959   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.669158   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.669315   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.669371   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.669386   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.669496   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.669832   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.670044   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.670171   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.670275   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.670559   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:31:17.671441   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.672225   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.672238   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.672404   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.672567   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.672724   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.672859   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.673060   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32869
	I0915 06:31:17.673167   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0915 06:31:17.673197   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:31:17.673464   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.673593   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.674180   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.674197   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.674602   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.674866   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.674970   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0915 06:31:17.676021   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.676113   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:31:17.676325   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.676341   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.676424   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0915 06:31:17.676962   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.677312   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.677398   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.677414   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.677562   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.677584   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.677859   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.678040   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.678549   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0915 06:31:17.679078   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.679084   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0915 06:31:17.679106   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.679181   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.679630   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.679647   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.679655   13942 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:31:17.679706   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.679708   13942 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:31:17.679755   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.679825   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.679985   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.680370   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.680389   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.680459   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.680667   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.680925   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.681204   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:31:17.681687   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:31:17.681708   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.681215   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.681887   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.682415   13942 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:31:17.682606   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.682597   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.682703   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:31:17.683242   13942 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:31:17.684048   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:31:17.684064   13942 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:31:17.684082   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.684243   13942 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:31:17.684541   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.685180   13942 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:31:17.685283   13942 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:31:17.685293   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:31:17.685309   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.685970   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.687103   13942 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:31:17.687121   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:31:17.687139   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.687240   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.687254   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.687302   13942 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:31:17.687385   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.687553   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.687909   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.688208   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:17.688311   13942 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:31:17.688326   13942 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:31:17.688342   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.688954   13942 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:31:17.688971   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:31:17.688986   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.689661   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.689991   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.690025   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.690043   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.690736   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.691322   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.691400   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.691795   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.691992   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.692014   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.692081   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.692213   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.692319   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.692403   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.692846   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.693446   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.694070   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.694103   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694326   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.694567   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.694594   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694685   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.694776   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:17.694916   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.694936   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.694974   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.695197   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.695468   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.695660   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.695792   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.696758   13942 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:31:17.696772   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:31:17.696794   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:17.696898   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0915 06:31:17.697337   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:17.697347   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.697891   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:17.697904   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:17.698246   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:17.698537   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.698553   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.698595   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:17.698766   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.698883   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.698993   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.699115   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.699868   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.700039   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:17.700238   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:17.700244   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:17.700527   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:17.700539   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:17.700557   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:17.700564   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:17.700571   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:17.700585   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:17.700712   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:17.700759   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:17.700775   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:17.700779   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	W0915 06:31:17.700830   13942 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
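The warning above means the volcano addon is simply skipped on the crio runtime while the remaining addons continue to install. A quick way to confirm which addons actually ended up enabled afterwards (not run as part of this test) would be:

	out/minikube-linux-amd64 -p addons-368929 addons list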
	I0915 06:31:17.701036   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:17.701127   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:17.701199   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:17.974082   13942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:31:17.974246   13942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 06:31:18.029440   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:31:18.029460   13942 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:31:18.067088   13942 node_ready.go:35] waiting up to 6m0s for node "addons-368929" to be "Ready" ...
	I0915 06:31:18.078224   13942 node_ready.go:49] node "addons-368929" has status "Ready":"True"
	I0915 06:31:18.078251   13942 node_ready.go:38] duration metric: took 11.135756ms for node "addons-368929" to be "Ready" ...
	I0915 06:31:18.078264   13942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
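The two waits above (node_ready.go and pod_ready.go) gate the rest of the addon rollout on the node and the system-critical pods becoming Ready. A roughly equivalent manual check, not part of this test run and using the context name from the log, would be:

	kubectl --context addons-368929 get node addons-368929
	kubectl --context addons-368929 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s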
	I0915 06:31:18.135940   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:31:18.135964   13942 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:31:18.141367   13942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:18.199001   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:31:18.204686   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:31:18.204710   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:31:18.222305   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:31:18.222333   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:31:18.235001   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:31:18.242915   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:31:18.264618   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:31:18.264645   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:31:18.278064   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:31:18.295028   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:31:18.313913   13942 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:31:18.313945   13942 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:31:18.321100   13942 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:31:18.321126   13942 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:31:18.324341   13942 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:31:18.324361   13942 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:31:18.342086   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:31:18.355928   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:31:18.386848   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:31:18.386873   13942 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:31:18.430309   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:31:18.430338   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:31:18.436199   13942 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:31:18.436227   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:31:18.467018   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:31:18.467043   13942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:31:18.469097   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:31:18.469118   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:31:18.475758   13942 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:31:18.475776   13942 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:31:18.524849   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:31:18.559766   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:31:18.559796   13942 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:31:18.574119   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:31:18.629489   13942 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:31:18.629514   13942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:31:18.636860   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:31:18.636883   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:31:18.656652   13942 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:31:18.656681   13942 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:31:18.671346   13942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:31:18.671371   13942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:31:18.776151   13942 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:31:18.776174   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:31:18.786697   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:31:18.786725   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:31:18.790802   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:31:18.790824   13942 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:31:18.811252   13942 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:31:18.811276   13942 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:31:18.841135   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:31:18.940848   13942 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:18.940871   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:31:18.948147   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:31:18.968172   13942 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:31:18.968200   13942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:31:19.099306   13942 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:31:19.099337   13942 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:31:19.208753   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:19.261571   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:31:19.261592   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:31:19.427555   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:31:19.427591   13942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:31:19.452460   13942 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:31:19.452489   13942 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:31:19.729819   13942 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.75553178s)
	I0915 06:31:19.729857   13942 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
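The sed pipeline that just completed rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1). Judging from the sed expressions in the command above, the fragment injected ahead of the forward plugin should look roughly like this (a sketch; the surrounding Corefile may differ), and the result can be inspected with kubectl:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

	kubectl --context addons-368929 -n kube-system get configmap coredns -o yaml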
	I0915 06:31:19.729914   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.530876961s)
	I0915 06:31:19.729955   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.729966   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.730363   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.730385   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.730385   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:19.730403   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.730418   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.730721   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.730736   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.737048   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:19.737066   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:19.737366   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:19.737390   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:19.737396   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:19.835914   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:31:19.835934   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:31:19.848468   13942 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:31:19.848493   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:31:20.068594   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:31:20.139377   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:31:20.139404   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:31:20.147456   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:20.234504   13942 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-368929" context rescaled to 1 replicas
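The kapi.go line above records minikube scaling the coredns deployment down to a single replica; a hypothetical by-hand equivalent (not something this run executes) would be:

	kubectl --context addons-368929 -n kube-system scale deployment coredns --replicas=1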
	I0915 06:31:20.491704   13942 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:31:20.491730   13942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:31:20.932400   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
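The single kubectl apply above installs all of the csi-hostpath-driver RBAC and deployment manifests in one shot. Hypothetical spot checks that the objects landed (not executed by the test) could be:

	kubectl --context addons-368929 get csidrivers
	kubectl --context addons-368929 -n kube-system get pods | grep csi-hostpath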
	I0915 06:31:22.212244   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:22.409208   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.174166978s)
	I0915 06:31:22.409210   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.166269282s)
	I0915 06:31:22.409299   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409318   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409257   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409391   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409620   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.409658   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.409665   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.409672   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409678   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.409744   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.409768   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.409783   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.409793   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:22.409801   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:22.410154   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.410195   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.410199   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:22.410217   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:22.410251   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:22.410221   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:24.154654   13942 pod_ready.go:93] pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:24.154684   13942 pod_ready.go:82] duration metric: took 6.01329144s for pod "coredns-7c65d6cfc9-d42kz" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:24.154696   13942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:24.756169   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:31:24.756215   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:24.759593   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:24.760038   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:24.760065   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:24.760279   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:24.760520   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:24.760709   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:24.760868   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:25.159761   13942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:31:25.482013   13942 addons.go:234] Setting addon gcp-auth=true in "addons-368929"
	I0915 06:31:25.482064   13942 host.go:66] Checking if "addons-368929" exists ...
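From here the log wires up gcp-auth using the credentials copied to /var/lib/minikube above. For reference only (the test drives this through addons.go rather than the CLI), the same addon can be toggled with:

	out/minikube-linux-amd64 -p addons-368929 addons enable gcp-auth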
	I0915 06:31:25.482369   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:25.482396   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:25.497336   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0915 06:31:25.497758   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:25.498209   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:25.498231   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:25.498517   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:25.499067   13942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:31:25.499103   13942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:31:25.514609   13942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0915 06:31:25.515143   13942 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:31:25.515688   13942 main.go:141] libmachine: Using API Version  1
	I0915 06:31:25.515716   13942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:31:25.516029   13942 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:31:25.516249   13942 main.go:141] libmachine: (addons-368929) Calling .GetState
	I0915 06:31:25.517863   13942 main.go:141] libmachine: (addons-368929) Calling .DriverName
	I0915 06:31:25.518086   13942 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:31:25.518112   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHHostname
	I0915 06:31:25.520701   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:25.521094   13942 main.go:141] libmachine: (addons-368929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:ac:60", ip: ""} in network mk-addons-368929: {Iface:virbr1 ExpiryTime:2024-09-15 07:30:49 +0000 UTC Type:0 Mac:52:54:00:b0:ac:60 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-368929 Clientid:01:52:54:00:b0:ac:60}
	I0915 06:31:25.521124   13942 main.go:141] libmachine: (addons-368929) DBG | domain addons-368929 has defined IP address 192.168.39.212 and MAC address 52:54:00:b0:ac:60 in network mk-addons-368929
	I0915 06:31:25.521252   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHPort
	I0915 06:31:25.521421   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHKeyPath
	I0915 06:31:25.521577   13942 main.go:141] libmachine: (addons-368929) Calling .GetSSHUsername
	I0915 06:31:25.521709   13942 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/addons-368929/id_rsa Username:docker}
	I0915 06:31:26.232203   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:26.243417   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.965315482s)
	I0915 06:31:26.243453   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.948392742s)
	I0915 06:31:26.243471   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243480   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243483   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243491   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243629   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.901516275s)
	I0915 06:31:26.243667   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243675   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.887721395s)
	I0915 06:31:26.243697   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243713   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243752   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.718870428s)
	I0915 06:31:26.243780   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243794   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243853   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.243869   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.243874   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.669731672s)
	I0915 06:31:26.243878   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243886   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243891   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.243899   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243677   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.243962   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.243992   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.243998   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244005   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244011   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244024   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.402863813s)
	I0915 06:31:26.244039   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244047   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244076   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.244093   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.244094   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.244103   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244111   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244115   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.244121   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.244127   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.295952452s)
	I0915 06:31:26.244138   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244145   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244147   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244155   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244156   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.244249   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.035463778s)
	W0915 06:31:26.244279   13942 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:31:26.244307   13942 retry.go:31] will retry after 256.93896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
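	[Editor's note] The apply above fails because the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same batch as the CRDs that define its kind; until the API server has registered those CRDs, the kind lookup fails with "ensure CRDs are installed first", and minikube falls back to retrying (and later re-applying with --force, as seen further down). Below is a minimal Go sketch of that retry-with-backoff pattern, assuming kubectl is on PATH and using hypothetical paths; it is illustrative only, not minikube's actual retry.go.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs "kubectl apply -f <manifest>" until it succeeds or the
	// retry budget is exhausted. This mirrors the pattern in the log above, where the
	// first apply fails with "no matches for kind VolumeSnapshotClass" because the
	// VolumeSnapshot CRDs created in the same batch are not yet registered.
	func applyWithRetry(kubeconfig, manifest string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
			time.Sleep(backoff)
			backoff *= 2 // simple exponential backoff between attempts
		}
		return lastErr
	}

	func main() {
		// Hypothetical paths, for illustration only.
		err := applyWithRetry("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 250*time.Millisecond)
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}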
	I0915 06:31:26.244415   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.175791562s)
	I0915 06:31:26.244434   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.244443   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245740   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245773   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245783   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245793   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.245803   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245868   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245878   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245886   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.245892   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.245938   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245963   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.245982   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.245990   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.245997   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.246004   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.246041   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246060   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246066   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246295   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246321   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246328   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246504   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246547   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246537   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246564   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246564   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246583   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246589   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246624   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246635   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246645   13942 addons.go:475] Verifying addon registry=true in "addons-368929"
	I0915 06:31:26.246763   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246789   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246797   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246808   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.246818   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.246946   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.246973   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.246979   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.246987   13942 addons.go:475] Verifying addon metrics-server=true in "addons-368929"
	I0915 06:31:26.247083   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.247110   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.247120   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248059   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.248078   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248087   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.248095   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.248285   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.248299   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.248307   13942 addons.go:475] Verifying addon ingress=true in "addons-368929"
	I0915 06:31:26.248402   13942 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-368929 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:31:26.248878   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:26.248901   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.250860   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.251360   13942 out.go:177] * Verifying registry addon...
	I0915 06:31:26.252258   13942 out.go:177] * Verifying ingress addon...
	I0915 06:31:26.253897   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:31:26.254716   13942 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:31:26.282507   13942 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:31:26.282535   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.283231   13942 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:31:26.283254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
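	[Editor's note] The kapi.go:75/96 lines here and throughout the rest of this log are a poll loop: list the pods matching a label selector in a namespace, then re-check until every pod leaves Pending. A minimal sketch of that pattern follows, assuming client-go and a hypothetical kubeconfig path; it is illustrative, not minikube's kapi package.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching the selector reports phase Running.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // keep polling on transient errors or empty lists
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		// Hypothetical kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			fmt.Println("registry pods not ready:", err)
		}
	}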
	I0915 06:31:26.326048   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:26.326076   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:26.326366   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:26.326389   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:26.502303   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:31:26.763104   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.763404   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.464589   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.465574   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.760221   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.760580   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.262507   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.263438   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.687007   13942 pod_ready.go:103] pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:28.777944   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.778464   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.790673   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.858215393s)
	I0915 06:31:28.790714   13942 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.272605642s)
	I0915 06:31:28.790731   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.790749   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.790820   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.288483379s)
	I0915 06:31:28.790865   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.790883   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791037   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:28.791080   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791088   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791096   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.791102   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791119   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791129   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791137   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:28.791143   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:28.791312   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:28.791359   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791365   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.791374   13942 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-368929"
	I0915 06:31:28.791536   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:28.791550   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:28.792735   13942 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:28.793437   13942 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:31:28.795140   13942 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:31:28.795935   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:31:28.796597   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:31:28.796611   13942 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:31:28.830229   13942 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:31:28.830253   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.871919   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:31:28.871943   13942 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:31:28.958746   13942 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:31:28.958766   13942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:31:28.979296   13942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:31:29.260856   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.260969   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.300857   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.763057   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.763185   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.815747   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.011418   13942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.032085812s)
	I0915 06:31:30.011471   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:30.011485   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:30.011741   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:30.011804   13942 main.go:141] libmachine: (addons-368929) DBG | Closing plugin on server side
	I0915 06:31:30.011820   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:30.011832   13942 main.go:141] libmachine: Making call to close driver server
	I0915 06:31:30.011842   13942 main.go:141] libmachine: (addons-368929) Calling .Close
	I0915 06:31:30.012069   13942 main.go:141] libmachine: Successfully made call to close driver server
	I0915 06:31:30.012085   13942 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 06:31:30.014149   13942 addons.go:475] Verifying addon gcp-auth=true in "addons-368929"
	I0915 06:31:30.015992   13942 out.go:177] * Verifying gcp-auth addon...
	I0915 06:31:30.018271   13942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:31:30.051440   13942 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:31:30.051458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.261829   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.261988   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.302477   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.525517   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.658488   13942 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xbx5t" not found
	I0915 06:31:30.658511   13942 pod_ready.go:82] duration metric: took 6.503808371s for pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace to be "Ready" ...
	E0915 06:31:30.658521   13942 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-xbx5t" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xbx5t" not found
	I0915 06:31:30.658528   13942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.665242   13942 pod_ready.go:93] pod "etcd-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.665263   13942 pod_ready.go:82] duration metric: took 6.72824ms for pod "etcd-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.665272   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.671635   13942 pod_ready.go:93] pod "kube-apiserver-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.671653   13942 pod_ready.go:82] duration metric: took 6.375828ms for pod "kube-apiserver-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.671661   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.678724   13942 pod_ready.go:93] pod "kube-controller-manager-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.678750   13942 pod_ready.go:82] duration metric: took 7.08028ms for pod "kube-controller-manager-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.678762   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ldpsk" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.687370   13942 pod_ready.go:93] pod "kube-proxy-ldpsk" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.687396   13942 pod_ready.go:82] duration metric: took 8.62656ms for pod "kube-proxy-ldpsk" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.687405   13942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.767076   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.767584   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.800983   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.859527   13942 pod_ready.go:93] pod "kube-scheduler-addons-368929" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:30.859556   13942 pod_ready.go:82] duration metric: took 172.143761ms for pod "kube-scheduler-addons-368929" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:30.859566   13942 pod_ready.go:39] duration metric: took 12.781287726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:31:30.859585   13942 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:31:30.859643   13942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:31:30.917869   13942 api_server.go:72] duration metric: took 13.386663133s to wait for apiserver process to appear ...
	I0915 06:31:30.917897   13942 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:31:30.917922   13942 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0915 06:31:30.923875   13942 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0915 06:31:30.924981   13942 api_server.go:141] control plane version: v1.31.1
	I0915 06:31:30.924999   13942 api_server.go:131] duration metric: took 7.095604ms to wait for apiserver health ...
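	[Editor's note] The healthz probe above is simply an HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal sketch follows; TLS verification is skipped purely for brevity, whereas a real client would trust the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy returns true when GET <url> answers 200 with body "ok".
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.212:8443/healthz")
		fmt.Println("apiserver healthy:", ok, err)
	}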
	I0915 06:31:30.925006   13942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:31:31.022799   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.064433   13942 system_pods.go:59] 18 kube-system pods found
	I0915 06:31:31.064467   13942 system_pods.go:61] "coredns-7c65d6cfc9-d42kz" [df259178-5edc-4af0-97ba-206daeab8c29] Running
	I0915 06:31:31.064479   13942 system_pods.go:61] "csi-hostpath-attacher-0" [0adda2d4-063c-4794-8f6b-ea93890a4674] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:31.064489   13942 system_pods.go:61] "csi-hostpath-resizer-0" [54b009bd-6cc0-49e7-82a2-9f7cf160569b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:31.064500   13942 system_pods.go:61] "csi-hostpathplugin-lsgqp" [7794aa6e-993e-4625-8fe9-562208645794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:31.064508   13942 system_pods.go:61] "etcd-addons-368929" [fd2748fc-bfea-4a7f-891d-99077f8233bf] Running
	I0915 06:31:31.064514   13942 system_pods.go:61] "kube-apiserver-addons-368929" [8ecbb12d-50b4-4d33-be92-d1430dbb9b31] Running
	I0915 06:31:31.064522   13942 system_pods.go:61] "kube-controller-manager-addons-368929" [966825ec-c456-4f8d-bb17-345e7ea3f48c] Running
	I0915 06:31:31.064529   13942 system_pods.go:61] "kube-ingress-dns-minikube" [ba1fa65c-7021-4ddf-a816-9f840f28af7d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:31.064539   13942 system_pods.go:61] "kube-proxy-ldpsk" [a2b364d0-170c-491f-a76a-1a9aac8268d1] Running
	I0915 06:31:31.064543   13942 system_pods.go:61] "kube-scheduler-addons-368929" [02b92939-9320-46e0-8afd-1f22d86465db] Running
	I0915 06:31:31.064549   13942 system_pods.go:61] "metrics-server-84c5f94fbc-2pshh" [0443fc45-c95c-4fab-9dfe-a1b598ac6c8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:31.064555   13942 system_pods.go:61] "nvidia-device-plugin-daemonset-kl795" [d0981521-b267-4cf9-82e3-73ca27f55631] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0915 06:31:31.064560   13942 system_pods.go:61] "registry-66c9cd494c-hbp2b" [29e66421-b96f-416d-b126-9c3b0d11bc7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:31.064566   13942 system_pods.go:61] "registry-proxy-ncp27" [cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:31.064574   13942 system_pods.go:61] "snapshot-controller-56fcc65765-gpfpd" [b21fd3c8-1828-47d4-8c9d-3281ea26cc2e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.064586   13942 system_pods.go:61] "snapshot-controller-56fcc65765-nj866" [364b2721-2e61-435f-b087-0c183c2e9c65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.064592   13942 system_pods.go:61] "storage-provisioner" [bf2fb433-e07a-4c6e-8438-67625e0215a8] Running
	I0915 06:31:31.064604   13942 system_pods.go:61] "tiller-deploy-b48cc5f79-cw67q" [6012a392-8d4a-4d69-a877-31fa7f992089] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:31:31.064613   13942 system_pods.go:74] duration metric: took 139.600952ms to wait for pod list to return data ...
	I0915 06:31:31.064626   13942 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:31:31.258650   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.259446   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.259836   13942 default_sa.go:45] found service account: "default"
	I0915 06:31:31.259856   13942 default_sa.go:55] duration metric: took 195.22286ms for default service account to be created ...
	I0915 06:31:31.259867   13942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:31:31.300588   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.464010   13942 system_pods.go:86] 18 kube-system pods found
	I0915 06:31:31.464039   13942 system_pods.go:89] "coredns-7c65d6cfc9-d42kz" [df259178-5edc-4af0-97ba-206daeab8c29] Running
	I0915 06:31:31.464047   13942 system_pods.go:89] "csi-hostpath-attacher-0" [0adda2d4-063c-4794-8f6b-ea93890a4674] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:31.464055   13942 system_pods.go:89] "csi-hostpath-resizer-0" [54b009bd-6cc0-49e7-82a2-9f7cf160569b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:31.464062   13942 system_pods.go:89] "csi-hostpathplugin-lsgqp" [7794aa6e-993e-4625-8fe9-562208645794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:31.464067   13942 system_pods.go:89] "etcd-addons-368929" [fd2748fc-bfea-4a7f-891d-99077f8233bf] Running
	I0915 06:31:31.464072   13942 system_pods.go:89] "kube-apiserver-addons-368929" [8ecbb12d-50b4-4d33-be92-d1430dbb9b31] Running
	I0915 06:31:31.464079   13942 system_pods.go:89] "kube-controller-manager-addons-368929" [966825ec-c456-4f8d-bb17-345e7ea3f48c] Running
	I0915 06:31:31.464086   13942 system_pods.go:89] "kube-ingress-dns-minikube" [ba1fa65c-7021-4ddf-a816-9f840f28af7d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:31.464098   13942 system_pods.go:89] "kube-proxy-ldpsk" [a2b364d0-170c-491f-a76a-1a9aac8268d1] Running
	I0915 06:31:31.464106   13942 system_pods.go:89] "kube-scheduler-addons-368929" [02b92939-9320-46e0-8afd-1f22d86465db] Running
	I0915 06:31:31.464114   13942 system_pods.go:89] "metrics-server-84c5f94fbc-2pshh" [0443fc45-c95c-4fab-9dfe-a1b598ac6c8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:31.464127   13942 system_pods.go:89] "nvidia-device-plugin-daemonset-kl795" [d0981521-b267-4cf9-82e3-73ca27f55631] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0915 06:31:31.464136   13942 system_pods.go:89] "registry-66c9cd494c-hbp2b" [29e66421-b96f-416d-b126-9c3b0d11bc7f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:31.464145   13942 system_pods.go:89] "registry-proxy-ncp27" [cb62ce46-b1f1-4fef-8ada-8ff4f0dc35ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:31.464153   13942 system_pods.go:89] "snapshot-controller-56fcc65765-gpfpd" [b21fd3c8-1828-47d4-8c9d-3281ea26cc2e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.464161   13942 system_pods.go:89] "snapshot-controller-56fcc65765-nj866" [364b2721-2e61-435f-b087-0c183c2e9c65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:31.464166   13942 system_pods.go:89] "storage-provisioner" [bf2fb433-e07a-4c6e-8438-67625e0215a8] Running
	I0915 06:31:31.464172   13942 system_pods.go:89] "tiller-deploy-b48cc5f79-cw67q" [6012a392-8d4a-4d69-a877-31fa7f992089] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:31:31.464181   13942 system_pods.go:126] duration metric: took 204.307671ms to wait for k8s-apps to be running ...
	I0915 06:31:31.464191   13942 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:31:31.464244   13942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:31:31.486956   13942 system_svc.go:56] duration metric: took 22.754715ms WaitForService to wait for kubelet
	I0915 06:31:31.486990   13942 kubeadm.go:582] duration metric: took 13.955789555s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
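	[Editor's note] The kubelet check above relies only on the exit code of "systemctl is-active --quiet kubelet": zero means the unit is active, non-zero means it is not. A minimal local sketch of that check (minikube runs the command over SSH inside the VM):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// serviceActive reports whether a systemd unit is active; --quiet suppresses
	// output so the exit code alone carries the answer.
	func serviceActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", serviceActive("kubelet"))
	}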
	I0915 06:31:31.487013   13942 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:31:31.522077   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.659879   13942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 06:31:31.659920   13942 node_conditions.go:123] node cpu capacity is 2
	I0915 06:31:31.659934   13942 node_conditions.go:105] duration metric: took 172.914644ms to run NodePressure ...
	I0915 06:31:31.659947   13942 start.go:241] waiting for startup goroutines ...
	I0915 06:31:31.759750   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.760177   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.800755   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.021954   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.259791   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.260569   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.300924   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.522475   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.759438   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.759934   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.800621   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.172220   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.271906   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.272260   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.302687   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.522439   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.763498   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.764289   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.801429   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.023038   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.259772   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.260041   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.300561   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.521913   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.759623   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.759710   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.800723   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.021779   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.260351   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.260447   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.299779   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.521913   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.760515   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.760927   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.800167   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.022203   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.257726   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.259665   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.299888   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.522528   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.758673   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.760425   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.801181   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.022185   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.258988   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.259048   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.300658   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.522233   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.757443   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.758723   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.800691   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.022095   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.257419   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.259009   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.300410   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.522197   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.757617   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.759144   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.800893   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.022318   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.261103   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.261240   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.300803   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.521354   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.759863   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.760107   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.802301   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.022269   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.257834   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.262295   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.300771   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.522661   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.759261   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.759486   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.801798   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.021829   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.289792   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.289896   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.301063   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.521512   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.761098   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.761110   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.801396   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.416726   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.417219   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.417240   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.417651   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.522481   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.760002   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.760206   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.801257   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.022267   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.257969   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:43.260312   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.304149   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.522666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.759718   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:43.761579   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.800010   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.021599   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.258922   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:44.259066   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.300086   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.521602   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.758715   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:44.759687   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.801888   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.022545   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.258928   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:45.260028   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.300426   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.522347   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.757677   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:45.759429   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.801059   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.023666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.259131   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:46.259319   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.301039   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.521574   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.758246   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:46.759289   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.800758   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.022872   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.700346   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.701683   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:47.701903   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.702433   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.759173   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.759895   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:47.861235   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.021603   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.259458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:48.259485   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.300707   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.522271   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.762907   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:48.763255   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.800498   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.022348   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.257789   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:49.258990   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.300932   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.521296   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.759707   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.760030   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:49.801156   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.021582   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.259593   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:50.259614   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.300101   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.522458   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.758309   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:50.759307   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.801005   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.021667   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.258800   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:51.259754   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.300360   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.522137   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.916983   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.918391   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:51.918708   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.022345   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.257769   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:52.259200   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.300612   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.522624   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.759128   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:52.760003   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.800696   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.022034   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.258260   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:53.259030   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.299898   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.522948   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.758046   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:53.759190   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.801909   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.022611   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.258314   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:54.259394   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.299868   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.522225   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.759462   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:54.759954   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.800966   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.021560   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.259668   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:55.260096   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.300543   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.522930   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.759164   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:55.759630   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.800281   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.023274   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.258687   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:56.258983   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.300450   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.521941   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.758690   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:56.759184   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.800444   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.022085   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.328096   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:57.328128   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.328468   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.522064   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.758754   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:57.761358   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.801386   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.022197   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.259116   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:58.259355   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.301472   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.522238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.757647   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:58.759138   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.800143   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:59.021428   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.259139   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:59.259914   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:59.300195   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:59.521969   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.757634   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:59.759388   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:59.801766   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:00.310029   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.310485   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.310541   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:00.310734   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:00.522275   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.757676   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:00.759851   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.800259   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:01.022105   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.263670   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:01.264256   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.363605   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:01.522274   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.758855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:01.759192   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.800380   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:02.022392   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.258770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:02.258779   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.300507   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:02.523063   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.757767   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:02.759609   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.800172   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:03.024853   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.258447   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:03.260135   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.301456   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:03.521270   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.759277   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.759579   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:03.859786   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:04.023200   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.259308   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:04.259454   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.302238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:04.524167   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.759036   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:04.759483   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.800855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:05.022461   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.257848   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:05.259070   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.300542   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:05.522141   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.757343   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:05.759078   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.800257   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:06.021588   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.259151   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:06.259229   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.301635   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:06.522501   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.760161   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:06.760475   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.800547   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:07.022162   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.260554   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:07.260733   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.300362   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:07.524441   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.757879   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:07.759690   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.799841   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:08.022590   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.258771   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.261346   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:08.300492   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:08.521937   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.760065   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:08.760608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.800923   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:09.023054   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.258254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:09.261196   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.303211   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:09.521992   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.759542   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:09.759968   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.800419   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:10.022241   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:10.257256   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:10.259665   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.301095   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:10.522381   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:10.758339   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:10.760016   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.800973   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:11.022131   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:11.257766   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:11.259848   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.300515   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:11.522584   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:11.759504   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:11.759819   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.800734   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:12.022702   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:12.259127   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.259205   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:12.301248   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:12.522307   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:12.759373   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:12.759784   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.800790   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:13.022473   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:13.258088   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:13.259484   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:13.301523   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:13.522640   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:13.760074   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:13.760590   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:13.861156   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:14.021516   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:14.259488   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:14.259642   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:14.300721   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:14.522807   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:14.778229   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:14.779139   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:14.873475   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:15.022821   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:15.259680   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:15.259809   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:15.300641   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:15.521637   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:15.758806   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:15.759633   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:15.800222   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:16.021553   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:16.259499   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:16.259517   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:16.299855   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:16.522762   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:16.759439   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:16.759858   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:16.800448   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:17.022916   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:17.269753   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:17.273875   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:17.311380   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:17.521792   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:17.757420   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:17.760061   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:17.800763   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:18.022671   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:18.260927   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:18.261314   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:18.360499   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:18.522431   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:18.758039   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:18.759972   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:18.800762   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:19.021770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:19.258785   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:19.258915   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:19.300433   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:19.522477   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:19.758545   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:19.758909   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:19.799951   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:20.021583   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:20.258286   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:20.259349   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:20.300404   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:20.522162   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:20.757244   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:20.760035   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:20.800381   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:21.022666   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:21.259375   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:21.259813   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:21.299671   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:21.522782   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:21.759071   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:21.759579   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:21.801715   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:22.022489   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:22.258632   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:32:22.258786   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:22.301546   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:22.521535   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:22.757652   13942 kapi.go:107] duration metric: took 56.503752424s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:32:22.759703   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:22.800194   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:23.021373   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:23.259556   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:23.300956   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:23.522488   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:23.759651   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:23.950468   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:24.021780   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:24.259077   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:24.300587   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:24.522714   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:24.759126   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:24.801761   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:25.021962   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:25.258702   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:25.300610   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:25.527977   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:25.758500   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:25.801128   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:26.024917   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:26.258889   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:26.300533   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:26.531719   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:26.760215   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:26.861604   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:27.022469   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:27.259796   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:27.301694   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:27.522577   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:27.759608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:27.799769   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:28.022221   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:28.260134   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:28.362251   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:28.522884   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:28.758529   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:28.800998   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:29.021597   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:29.260071   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:29.300411   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:29.521843   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:29.759942   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:29.808216   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:30.025869   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:30.258745   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:30.300960   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:30.526667   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:30.761078   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:30.808613   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:31.023050   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.258854   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:31.300480   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:31.522174   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.761507   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:31.800897   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:32.022757   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.261197   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:32.301193   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:32.522071   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.762443   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:32.801404   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:33.021999   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.260491   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:33.300695   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:33.525170   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.769640   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:33.868134   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:34.022189   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.260688   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:34.360810   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:34.525722   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.766523   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:34.805396   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:35.030161   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:35.258936   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:35.300824   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:35.522082   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:35.758581   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:35.801492   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:36.021288   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:36.259323   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:36.300415   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:36.522271   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:36.761188   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:36.800799   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:37.022023   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:37.262566   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:37.300820   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:37.522925   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:37.758831   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:37.799987   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:38.022158   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:38.260608   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:38.362196   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:38.521238   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:38.999060   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:38.999332   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:39.100770   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:39.267733   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:39.304254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:39.527622   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:39.759148   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:39.801011   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:40.023997   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:40.258867   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:40.301651   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:40.521565   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:40.759515   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:40.800939   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:41.022706   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:41.259458   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:41.301688   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:41.806497   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:41.811813   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:41.812222   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:42.023382   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:42.267386   13942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:42.367885   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:42.525013   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:42.759644   13942 kapi.go:107] duration metric: took 1m16.504925037s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:32:42.800316   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:43.022950   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:43.300696   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:43.521739   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:43.802846   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:44.022227   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:44.300361   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:44.522479   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:44.802449   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:45.022566   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:45.300843   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:45.522072   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:45.800593   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:46.022008   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:46.301212   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:46.521319   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:46.800712   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:47.022599   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:47.301146   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:47.522228   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:47.801980   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:48.021550   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:48.301089   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:48.521254   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:48.802057   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:49.022313   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:49.307681   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:49.522886   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:49.803712   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:50.022128   13942 kapi.go:107] duration metric: took 1m20.003852984s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:32:50.023467   13942 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-368929 cluster.
	I0915 06:32:50.024716   13942 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:32:50.025878   13942 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:32:50.304584   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:50.803369   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:51.300707   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:51.801178   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:52.301423   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:52.801624   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:53.532327   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:53.810592   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:54.301743   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:54.800975   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:55.300394   13942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:32:55.800085   13942 kapi.go:107] duration metric: took 1m27.004147412s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:32:55.802070   13942 out.go:177] * Enabled addons: default-storageclass, ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0915 06:32:55.803500   13942 addons.go:510] duration metric: took 1m38.272362908s for enable addons: enabled=[default-storageclass ingress-dns storage-provisioner nvidia-device-plugin helm-tiller cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0915 06:32:55.803536   13942 start.go:246] waiting for cluster config update ...
	I0915 06:32:55.803553   13942 start.go:255] writing updated cluster config ...
	I0915 06:32:55.803803   13942 ssh_runner.go:195] Run: rm -f paused
	I0915 06:32:55.854452   13942 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:32:55.856106   13942 out.go:177] * Done! kubectl is now configured to use "addons-368929" cluster and "default" namespace by default
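
The gcp-auth output above notes that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch only (the pod name, image, and the "true" label value are illustrative assumptions, not taken from this run; only the label key and the cluster name come from the log), such a pod could be created with:

	# Hypothetical pod that the gcp-auth webhook should skip.
	# Name, image and the "true" label value are placeholders;
	# the label key is the one named in the log output above.
	kubectl --context addons-368929 run skip-gcp-auth-demo \
	  --image=busybox --restart=Never \
	  --labels="gcp-auth-skip-secret=true" \
	  -- sleep 3600

The same output also states that pods which already exist only pick up credentials after being recreated or after rerunning addons enable with --refresh.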
	
	
	==> CRI-O <==
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.355750034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc3de013-96ae-4637-8b09-f2ad013434ee name=/runtime.v1.RuntimeService/Version
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.357235823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a8bd00c-aa7c-4eb2-874d-d04e4ece6668 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.358425538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382829358398877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a8bd00c-aa7c-4eb2-874d-d04e4ece6668 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.359074841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e56a50ca-94cd-43f1-9360-69b2cb250b56 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.359152151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e56a50ca-94cd-43f1-9360-69b2cb250b56 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.359380136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882
734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e56a50ca-94cd-43f1-9360-69b2cb250b56 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.396265609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51b0a477-4c69-434e-9af1-4b61dbc3421b name=/runtime.v1.RuntimeService/Version
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.396349781Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51b0a477-4c69-434e-9af1-4b61dbc3421b name=/runtime.v1.RuntimeService/Version
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.397911820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=886058a2-ceb3-48f8-aa15-092047425ecc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.399066826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382829399041269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=886058a2-ceb3-48f8-aa15-092047425ecc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.399764164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c31f2fe-ca39-4f3f-97c0-ab635322f704 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.399817203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c31f2fe-ca39-4f3f-97c0-ab635322f704 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.400090795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882
734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c31f2fe-ca39-4f3f-97c0-ab635322f704 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.441739336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d908164-1416-45c3-84d0-1d0d6565c5c8 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.441857665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d908164-1416-45c3-84d0-1d0d6565c5c8 name=/runtime.v1.RuntimeService/Version
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.443046373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1770819c-84d4-4f93-a8f4-f7c7a8ff3c42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.444340053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382829444315361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1770819c-84d4-4f93-a8f4-f7c7a8ff3c42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.445172036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bffef13-33b7-483b-86cf-37f44bd4b487 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.445243175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bffef13-33b7-483b-86cf-37f44bd4b487 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.445475094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882
734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bffef13-33b7-483b-86cf-37f44bd4b487 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.447263732Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e764c520-9678-4aae-a659-01ae19797096 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.447504121Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-hbbg7,Uid:2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726382659750457323,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:44:19.429453665Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&PodSandboxMetadata{Name:nginx,Uid:03db1b25-54f4-4882-85e5-a3edf2b37fd6,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1726382517189827059,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:41:56.878855144Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3388b9e9e094819217462ba2ce99223800afe3ddc99b45691de252d6392e37ed,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c8076028-6672-48b6-8085-14b06a0a0268,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381976437417438,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8076028-6672-48b6-8085-14b06a0a0268,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:32:56.125683406Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4805b7ff0a6b13511a
7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-g2rmd,Uid:c6416c0e-a192-4454-a335-5c49f36ea19b,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381954130415453,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:31:29.918924866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-2pshh,Uid:0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381883764675377,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-se
rver-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:31:23.154315730Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bf2fb433-e07a-4c6e-8438-67625e0215a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381882770415689,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"lab
els\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-15T06:31:22.399864547Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&PodSandboxMetadata{Name:kube-proxy-ldpsk,Uid:a2b364d0-170c-491f-a76a-1a9aac8268d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381879429034715,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:31:17.921247860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-d42kz,Uid:df259178-5edc-4af0-97ba-206daeab8c29,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381879304189424,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:31:18.089325646Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodS
andbox{Id:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-368929,Uid:3109ca0208e4bc37d3c2b041acc81270,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381866924357896,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3109ca0208e4bc37d3c2b041acc81270,kubernetes.io/config.seen: 2024-09-15T06:31:06.428919569Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-368929,Uid:fabc84c41d45bf0e50c614cf9d14b6d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381866909225665,Labels:map[stri
ng]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fabc84c41d45bf0e50c614cf9d14b6d5,kubernetes.io/config.seen: 2024-09-15T06:31:06.428920438Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&PodSandboxMetadata{Name:etcd-addons-368929,Uid:e5080c7122a700b4220ec74fbddc5b38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381866893459739,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://1
92.168.39.212:2379,kubernetes.io/config.hash: e5080c7122a700b4220ec74fbddc5b38,kubernetes.io/config.seen: 2024-09-15T06:31:06.428914083Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-368929,Uid:5d1518c4b384a8cdb6b825f3767bc485,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726381866892185987,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.212:8443,kubernetes.io/config.hash: 5d1518c4b384a8cdb6b825f3767bc485,kubernetes.io/config.seen: 2024-09-15T06:31:06.428918058Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector
/interceptors.go:74" id=e764c520-9678-4aae-a659-01ae19797096 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.448354029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5698b5d5-2184-4930-a123-3146ad8858dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.448417502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5698b5d5-2184-4930-a123-3146ad8858dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 06:47:09 addons-368929 crio[662]: time="2024-09-15 06:47:09.448643398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a998d8313e4f4b8762abf2436fe33923f8d0e8a3f48bf7e57874970e79a2f66,PodSandboxId:02a0dd44ead5240dab2e32893461095a2b4ff513c331788c3ef3b69a1c50782e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726382662510449826,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hbbg7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b923ad3-faa7-4dfa-8a1a-08ec7a851fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c6d745c3b5a950a7e3fbd43c2b7cfac54fea1278eba98b4dc91d0c78dc22af,PodSandboxId:56736db040b57431c2733303cf35ebe8f1fc747d2ab54ad0fa8f6dac00f4ba5c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726382521302371284,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03db1b25-54f4-4882-85e5-a3edf2b37fd6,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73,PodSandboxId:4805b7ff0a6b13511a7dffc70f7fc5eabcad01548e746b45625d996cf9586d5f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726381968716024437,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-g2rmd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: c6416c0e-a192-4454-a335-5c49f36ea19b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e762ef5d36b86f59e2d6a45c0c177b5566d12ec93c6b51499fcff97bb652b874,PodSandboxId:5465db13b3322a19437892b4a03612b20a842b3e0d1583d21bda911dad3ff0b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726381907780581831,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2pshh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0443fc45-c95c-4fab-9dfe-a1b598ac6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2,PodSandboxId:14b4ae1ab9f1b40837daf07fd350cd8061ba56ec89554f5df3bf853ac9b4cc99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726381884397299457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2fb433-e07a-4c6e-8438-67625e0215a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e,PodSandboxId:b19df699e240af6c1f7248d381f4f7e7d3d3dabb3197c6c7f926b2633c740eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726381882
734885829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d42kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df259178-5edc-4af0-97ba-206daeab8c29,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c,PodSandboxId:3090e56371ab7a7e97c0234d1b573ad0d6e27181cd3cda91d653a71adeffcb6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726381879932666993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldpsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b364d0-170c-491f-a76a-1a9aac8268d1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da,PodSandboxId:d1a7384c192cbd7bd1bb33d7c4dce95c22863640a4af9f472c6eed350f797cb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726381867165201460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabc84c41d45bf0e50c614cf9d14b6d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2,PodSandboxId:c91b2b79714717505cb7b61f0859fccc03a24fdaf9deba663b2fffc57f8ca95b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726381867173545366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5080c7122a700b4220ec74fbddc5b38,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a,PodSandboxId:ddbb5486a2f5fb850ea89d0599da3f76bb5f29363310953da92d44468b188272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726381867159171775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3109ca0208e4bc37d3c2b041acc81270,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070,PodSandboxId:801081b18db2cd25819255b4e10919986e6dae4caee2ad1d6838680996adcaf6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726381867152114781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-368929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d1518c4b384a8cdb6b825f3767bc485,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5698b5d5-2184-4930-a123-3146ad8858dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a998d8313e4f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   02a0dd44ead52       hello-world-app-55bf9c44b4-hbbg7
	00c6d745c3b5a       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   56736db040b57       nginx
	af20c2eee64f4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   4805b7ff0a6b1       gcp-auth-89d5ffd79-g2rmd
	e762ef5d36b86       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   5465db13b3322       metrics-server-84c5f94fbc-2pshh
	522296a807289       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   14b4ae1ab9f1b       storage-provisioner
	0eaf92b0ac4cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   b19df699e240a       coredns-7c65d6cfc9-d42kz
	f44a755ad6406       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   3090e56371ab7       kube-proxy-ldpsk
	2d2c642ca90bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   c91b2b7971471       etcd-addons-368929
	5278a91f04afe       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        16 minutes ago      Running             kube-scheduler            0                   d1a7384c192cb       kube-scheduler-addons-368929
	66eb2bd2d4313       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        16 minutes ago      Running             kube-controller-manager   0                   ddbb5486a2f5f       kube-controller-manager-addons-368929
	0f00b1281db41       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        16 minutes ago      Running             kube-apiserver            0                   801081b18db2c       kube-apiserver-addons-368929
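The table above is CRI-O's own view of the node. A minimal sketch for reproducing it directly on the guest, assuming the addons-368929 profile is still running and crictl is available inside the minikube VM:

    # open a shell on the node, then query the runtime over the CRI socket
    minikube ssh -p addons-368929
    sudo crictl pods          # pod sandboxes, matching the ListPodSandbox dump above
    sudo crictl ps -a         # containers, including any that have exited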
	
	
	==> coredns [0eaf92b0ac4cfe01f22e419f953e1ab759e19ff88def572db560c90f1f42ba0e] <==
	[INFO] 10.244.0.7:60872 - 2697 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000372412s
	[INFO] 10.244.0.7:54481 - 63880 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200669s
	[INFO] 10.244.0.7:54481 - 36493 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014208s
	[INFO] 10.244.0.7:58760 - 443 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090787s
	[INFO] 10.244.0.7:58760 - 23481 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088177s
	[INFO] 10.244.0.7:48535 - 47705 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000271192s
	[INFO] 10.244.0.7:48535 - 54567 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083036s
	[INFO] 10.244.0.7:42330 - 4731 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138141s
	[INFO] 10.244.0.7:42330 - 6517 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000092133s
	[INFO] 10.244.0.7:47964 - 26953 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000283443s
	[INFO] 10.244.0.7:47964 - 19270 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142949s
	[INFO] 10.244.0.7:49955 - 21487 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137257s
	[INFO] 10.244.0.7:49955 - 61676 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095206s
	[INFO] 10.244.0.7:38355 - 23195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000252309s
	[INFO] 10.244.0.7:38355 - 62100 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060261s
	[INFO] 10.244.0.7:43701 - 7554 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161529s
	[INFO] 10.244.0.7:43701 - 65420 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000048462s
	[INFO] 10.244.0.22:50845 - 48293 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000496971s
	[INFO] 10.244.0.22:56694 - 7666 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106022s
	[INFO] 10.244.0.22:53136 - 48746 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122296s
	[INFO] 10.244.0.22:43399 - 31030 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149247s
	[INFO] 10.244.0.22:48872 - 36794 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141706s
	[INFO] 10.244.0.22:38135 - 52360 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121673s
	[INFO] 10.244.0.22:39775 - 36027 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000834966s
	[INFO] 10.244.0.22:40967 - 58177 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00127761s
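The NXDOMAIN pairs above are the pod resolver walking its search path: with the default ndots:5 option, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended, and only the final absolute lookup returns NOERROR. A sketch of the resolv.conf such a pod typically receives (illustrative defaults, not read from this cluster):

    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5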
	
	
	==> describe nodes <==
	Name:               addons-368929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-368929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-368929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_31_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-368929
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:31:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-368929
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:47:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:44:46 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:44:46 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:44:46 +0000   Sun, 15 Sep 2024 06:31:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:44:46 +0000   Sun, 15 Sep 2024 06:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    addons-368929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6b3f2f71dbb42e29461dbb3bd421d93
	  System UUID:                a6b3f2f7-1dbb-42e2-9461-dbb3bd421d93
	  Boot ID:                    da80a0da-5697-4701-b6a4-39271e495e6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-hbbg7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  gcp-auth                    gcp-auth-89d5ffd79-g2rmd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-d42kz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-368929                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-368929             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-368929    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ldpsk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-368929             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-368929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-368929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-368929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-368929 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-368929 event: Registered Node addons-368929 in Controller
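This node description can be regenerated while the cluster is up; a minimal sketch, assuming the kubeconfig context minikube created for the profile:

    kubectl --context addons-368929 describe node addons-368929
    kubectl --context addons-368929 top node    # live usage; needs the metrics-server shown above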
	
	
	==> dmesg <==
	[  +6.450745] kauditd_printk_skb: 66 callbacks suppressed
	[ +17.481041] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.378291] kauditd_printk_skb: 32 callbacks suppressed
	[Sep15 06:32] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.107181] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.597636] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.061750] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.516101] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.544554] kauditd_printk_skb: 47 callbacks suppressed
	[Sep15 06:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:38] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:40] kauditd_printk_skb: 28 callbacks suppressed
	[Sep15 06:41] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.724123] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.326898] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.150758] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.170813] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.523089] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.886920] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.541296] kauditd_printk_skb: 33 callbacks suppressed
	[Sep15 06:42] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.212644] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.696206] kauditd_printk_skb: 32 callbacks suppressed
	[Sep15 06:44] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [2d2c642ca90bf40feb7d14c68e548aa28578af3d9a380454fc7f7076f45ae1b2] <==
	{"level":"info","ts":"2024-09-15T06:32:38.970093Z","caller":"traceutil/trace.go:171","msg":"trace[235109500] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"192.218119ms","start":"2024-09-15T06:32:38.777867Z","end":"2024-09-15T06:32:38.970085Z","steps":["trace[235109500] 'agreement among raft nodes before linearized reading'  (duration: 192.18886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.779440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.051742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:41.782084Z","caller":"traceutil/trace.go:171","msg":"trace[1975915837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1088; }","duration":"281.712677ms","start":"2024-09-15T06:32:41.500363Z","end":"2024-09-15T06:32:41.782076Z","steps":["trace[1975915837] 'range keys from in-memory index tree'  (duration: 279.004355ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:41.781563Z","caller":"traceutil/trace.go:171","msg":"trace[25339734] linearizableReadLoop","detail":"{readStateIndex:1122; appliedIndex:1121; }","duration":"133.379421ms","start":"2024-09-15T06:32:41.648165Z","end":"2024-09-15T06:32:41.781545Z","steps":["trace[25339734] 'read index received'  (duration: 126.703432ms)","trace[25339734] 'applied index is now lower than readState.Index'  (duration: 6.675186ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:32:41.781804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.625248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-15T06:32:41.781933Z","caller":"traceutil/trace.go:171","msg":"trace[1390778342] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"155.371511ms","start":"2024-09-15T06:32:41.626549Z","end":"2024-09-15T06:32:41.781921Z","steps":["trace[1390778342] 'process raft request'  (duration: 148.372455ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:41.783606Z","caller":"traceutil/trace.go:171","msg":"trace[325349942] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1089; }","duration":"135.434363ms","start":"2024-09-15T06:32:41.648161Z","end":"2024-09-15T06:32:41.783595Z","steps":["trace[325349942] 'agreement among raft nodes before linearized reading'  (duration: 133.430374ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.783629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.612501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:41.783886Z","caller":"traceutil/trace.go:171","msg":"trace[808376363] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"100.870711ms","start":"2024-09-15T06:32:41.683006Z","end":"2024-09-15T06:32:41.783876Z","steps":["trace[808376363] 'agreement among raft nodes before linearized reading'  (duration: 100.588865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:41.786261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.155485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.212\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-09-15T06:32:41.787603Z","caller":"traceutil/trace.go:171","msg":"trace[32333181] range","detail":"{range_begin:/registry/masterleases/192.168.39.212; range_end:; response_count:1; response_revision:1089; }","duration":"104.495783ms","start":"2024-09-15T06:32:41.683094Z","end":"2024-09-15T06:32:41.787590Z","steps":["trace[32333181] 'agreement among raft nodes before linearized reading'  (duration: 103.092831ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:32:53.500272Z","caller":"traceutil/trace.go:171","msg":"trace[423926321] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1188; }","duration":"222.338479ms","start":"2024-09-15T06:32:53.277918Z","end":"2024-09-15T06:32:53.500256Z","steps":["trace[423926321] 'read index received'  (duration: 222.103394ms)","trace[423926321] 'applied index is now lower than readState.Index'  (duration: 234.479µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:32:53.500508Z","caller":"traceutil/trace.go:171","msg":"trace[865342865] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"458.921949ms","start":"2024-09-15T06:32:53.041572Z","end":"2024-09-15T06:32:53.500494Z","steps":["trace[865342865] 'process raft request'  (duration: 458.504383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:32:53.500612Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:32:53.041556Z","time spent":"459.003891ms","remote":"127.0.0.1:38690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1144 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-15T06:32:53.500512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.59145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:32:53.500848Z","caller":"traceutil/trace.go:171","msg":"trace[2102283515] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1153; }","duration":"222.946871ms","start":"2024-09-15T06:32:53.277893Z","end":"2024-09-15T06:32:53.500839Z","steps":["trace[2102283515] 'agreement among raft nodes before linearized reading'  (duration: 222.546912ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:41:08.543557Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1522}
	{"level":"info","ts":"2024-09-15T06:41:08.573071Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1522,"took":"28.970485ms","hash":4115302871,"current-db-size-bytes":6864896,"current-db-size":"6.9 MB","current-db-size-in-use-bytes":3567616,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-15T06:41:08.573176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4115302871,"revision":1522,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-15T06:41:20.310795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.008412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:41:20.310906Z","caller":"traceutil/trace.go:171","msg":"trace[1098128179] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2119; }","duration":"220.206928ms","start":"2024-09-15T06:41:20.090684Z","end":"2024-09-15T06:41:20.310891Z","steps":["trace[1098128179] 'range keys from in-memory index tree'  (duration: 219.905648ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:42:34.551551Z","caller":"traceutil/trace.go:171","msg":"trace[1509624691] transaction","detail":"{read_only:false; response_revision:2532; number_of_response:1; }","duration":"185.999699ms","start":"2024-09-15T06:42:34.365512Z","end":"2024-09-15T06:42:34.551511Z","steps":["trace[1509624691] 'process raft request'  (duration: 185.794249ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:46:08.550605Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1999}
	{"level":"info","ts":"2024-09-15T06:46:08.571570Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1999,"took":"19.956118ms","hash":702352596,"current-db-size-bytes":6864896,"current-db-size":"6.9 MB","current-db-size-in-use-bytes":4550656,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-15T06:46:08.571635Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":702352596,"revision":1999,"compact-revision":1522}
	
	
	==> gcp-auth [af20c2eee64f4fcc50e6404e2b2fed97171a5ea9e4125424f4d71e9457a7ca73] <==
	2024/09/15 06:32:56 Ready to write response ...
	2024/09/15 06:40:59 Ready to marshal response ...
	2024/09/15 06:40:59 Ready to write response ...
	2024/09/15 06:40:59 Ready to marshal response ...
	2024/09/15 06:40:59 Ready to write response ...
	2024/09/15 06:41:07 Ready to marshal response ...
	2024/09/15 06:41:07 Ready to write response ...
	2024/09/15 06:41:09 Ready to marshal response ...
	2024/09/15 06:41:09 Ready to write response ...
	2024/09/15 06:41:12 Ready to marshal response ...
	2024/09/15 06:41:12 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:17 Ready to marshal response ...
	2024/09/15 06:41:17 Ready to write response ...
	2024/09/15 06:41:40 Ready to marshal response ...
	2024/09/15 06:41:40 Ready to write response ...
	2024/09/15 06:41:56 Ready to marshal response ...
	2024/09/15 06:41:56 Ready to write response ...
	2024/09/15 06:42:01 Ready to marshal response ...
	2024/09/15 06:42:01 Ready to write response ...
	2024/09/15 06:44:19 Ready to marshal response ...
	2024/09/15 06:44:19 Ready to write response ...
	
	
	==> kernel <==
	 06:47:09 up 16 min,  0 users,  load average: 0.35, 0.33, 0.37
	Linux addons-368929 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f00b1281db4182d850d3932972812c4325ffe979395b5113ea42f5ab2bd1070] <==
	I0915 06:41:22.185630       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0915 06:41:28.694030       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:37.562228       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:38.571198       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:39.581372       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:40.597002       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:41.607613       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:42.617845       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:41:43.624836       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 06:41:55.500605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.500667       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.520225       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.520373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.561239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.561354       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:55.644936       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:55.645049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:41:56.582005       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:41:56.645832       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 06:41:56.728173       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	W0915 06:41:56.736990       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0915 06:41:56.920189       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.144.22"}
	I0915 06:42:12.493514       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:42:13.626888       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:44:19.590482       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.68.226"}
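The "Terminating all watchers" lines follow the removal of the snapshot.storage.k8s.io and gadget.kinvolk.io groups, which is what happens when their CRDs are deleted (for example when the corresponding addons are disabled); the earlier bearer-token errors reference a service account (local-path-provisioner) that appears to have been removed the same way. A quick, hedged check for anything left behind:

    kubectl --context addons-368929 get apiservices | grep False         # unavailable aggregated APIs
    kubectl --context addons-368929 get crds | grep -E "snapshot|gadget"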
	
	
	==> kube-controller-manager [66eb2bd2d4313e9a0d5cab0422c82ba2aed76ed46d210b85b288ac6380320e4a] <==
	W0915 06:44:47.325293       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:47.325351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:20.929044       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:20.929181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:24.872792       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:24.872868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:40.972165       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:40.972242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:42.386681       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:42.386845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:59.625631       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:59.625852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:46:02.204989       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:46:02.205105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:46:24.046422       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:46:24.046656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:46:32.884956       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:46:32.885117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:46:36.225204       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:46:36.225354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:46:57.242244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:46:57.242459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:47:06.491114       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:47:06.491275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:47:08.416009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="8.933µs"
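The repeated PartialObjectMetadata failures come from the controller-manager's metadata informers still watching a resource type whose API was just removed (consistent with the watcher terminations in the kube-apiserver log above); they are noisy but do not by themselves indicate a controller failure. A sketch for isolating them:

    kubectl --context addons-368929 -n kube-system logs kube-controller-manager-addons-368929 --tail=500 \
      | grep "could not find the requested resource"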
	
	
	==> kube-proxy [f44a755ad6406eb88d176ec9141665d8f250054c1ff3c9fd041703fbd2f1b40c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 06:31:21.923821       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 06:31:22.107135       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	E0915 06:31:22.107232       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:31:22.469316       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 06:31:22.469382       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 06:31:22.469406       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:31:22.502604       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:31:22.502994       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:31:22.503027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:31:22.513405       1 config.go:199] "Starting service config controller"
	I0915 06:31:22.517572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:31:22.517677       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:31:22.517749       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:31:22.524073       1 config.go:328] "Starting node config controller"
	I0915 06:31:22.524163       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:31:22.617837       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:31:22.617902       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:31:22.624325       1 shared_informer.go:320] Caches are synced for node config
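The nftables errors at the top of this section appear to come from kube-proxy's startup cleanup of rules left by other proxy modes; the guest kernel rejects the nft operations, and kube-proxy proceeds with the iptables Proxier, as the "Using iptables Proxier" line above confirms. A sketch for inspecting the resulting service rules on the node (standard iptables flags; run inside the guest):

    minikube ssh -p addons-368929
    sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20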
	
	
	==> kube-scheduler [5278a91f04afed3b2b54e1332549cd11a7b6a6fe3dd6dfc404742af4ad2159da] <==
	W0915 06:31:10.099747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:10.099810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:10.971911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:31:10.972055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:10.992635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:10.992769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.044975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:31:11.045071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.099337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:31:11.099564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.127086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:31:11.127389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.176096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:11.176240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.192815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:31:11.193073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.242830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:31:11.242950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.291677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:31:11.291812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.319296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:31:11.319464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:31:11.333004       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:31:11.333054       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:31:13.689226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:46:12 addons-368929 kubelet[1209]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 06:46:13 addons-368929 kubelet[1209]: E0915 06:46:13.000424    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382772999836042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:13 addons-368929 kubelet[1209]: E0915 06:46:13.000508    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382772999836042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:13 addons-368929 kubelet[1209]: E0915 06:46:13.527675    1209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8076028-6672-48b6-8085-14b06a0a0268"
	Sep 15 06:46:23 addons-368929 kubelet[1209]: E0915 06:46:23.003317    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382783002924447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:23 addons-368929 kubelet[1209]: E0915 06:46:23.003463    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382783002924447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:25 addons-368929 kubelet[1209]: E0915 06:46:25.526995    1209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8076028-6672-48b6-8085-14b06a0a0268"
	Sep 15 06:46:33 addons-368929 kubelet[1209]: E0915 06:46:33.005869    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382793005431683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:33 addons-368929 kubelet[1209]: E0915 06:46:33.005966    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382793005431683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:40 addons-368929 kubelet[1209]: E0915 06:46:40.527132    1209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8076028-6672-48b6-8085-14b06a0a0268"
	Sep 15 06:46:43 addons-368929 kubelet[1209]: E0915 06:46:43.008802    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382803008125207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:43 addons-368929 kubelet[1209]: E0915 06:46:43.009087    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382803008125207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:53 addons-368929 kubelet[1209]: E0915 06:46:53.011775    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382813011336586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:53 addons-368929 kubelet[1209]: E0915 06:46:53.012181    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382813011336586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:55 addons-368929 kubelet[1209]: E0915 06:46:55.526789    1209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8076028-6672-48b6-8085-14b06a0a0268"
	Sep 15 06:47:03 addons-368929 kubelet[1209]: E0915 06:47:03.014872    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382823014387647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:47:03 addons-368929 kubelet[1209]: E0915 06:47:03.014923    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382823014387647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:47:07 addons-368929 kubelet[1209]: E0915 06:47:07.526564    1209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8076028-6672-48b6-8085-14b06a0a0268"
	Sep 15 06:47:08 addons-368929 kubelet[1209]: I0915 06:47:08.439516    1209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-hbbg7" podStartSLOduration=166.953980709 podStartE2EDuration="2m49.439493672s" podCreationTimestamp="2024-09-15 06:44:19 +0000 UTC" firstStartedPulling="2024-09-15 06:44:20.00377918 +0000 UTC m=+787.609831402" lastFinishedPulling="2024-09-15 06:44:22.489292145 +0000 UTC m=+790.095344365" observedRunningTime="2024-09-15 06:44:22.810751687 +0000 UTC m=+790.416803926" watchObservedRunningTime="2024-09-15 06:47:08.439493672 +0000 UTC m=+956.045545913"
	Sep 15 06:47:09 addons-368929 kubelet[1209]: I0915 06:47:09.813842    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0443fc45-c95c-4fab-9dfe-a1b598ac6c8b-tmp-dir\") pod \"0443fc45-c95c-4fab-9dfe-a1b598ac6c8b\" (UID: \"0443fc45-c95c-4fab-9dfe-a1b598ac6c8b\") "
	Sep 15 06:47:09 addons-368929 kubelet[1209]: I0915 06:47:09.813898    1209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvxzv\" (UniqueName: \"kubernetes.io/projected/0443fc45-c95c-4fab-9dfe-a1b598ac6c8b-kube-api-access-gvxzv\") pod \"0443fc45-c95c-4fab-9dfe-a1b598ac6c8b\" (UID: \"0443fc45-c95c-4fab-9dfe-a1b598ac6c8b\") "
	Sep 15 06:47:09 addons-368929 kubelet[1209]: I0915 06:47:09.814480    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0443fc45-c95c-4fab-9dfe-a1b598ac6c8b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "0443fc45-c95c-4fab-9dfe-a1b598ac6c8b" (UID: "0443fc45-c95c-4fab-9dfe-a1b598ac6c8b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 15 06:47:09 addons-368929 kubelet[1209]: I0915 06:47:09.823085    1209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0443fc45-c95c-4fab-9dfe-a1b598ac6c8b-kube-api-access-gvxzv" (OuterVolumeSpecName: "kube-api-access-gvxzv") pod "0443fc45-c95c-4fab-9dfe-a1b598ac6c8b" (UID: "0443fc45-c95c-4fab-9dfe-a1b598ac6c8b"). InnerVolumeSpecName "kube-api-access-gvxzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:47:09 addons-368929 kubelet[1209]: I0915 06:47:09.914250    1209 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0443fc45-c95c-4fab-9dfe-a1b598ac6c8b-tmp-dir\") on node \"addons-368929\" DevicePath \"\""
	Sep 15 06:47:09 addons-368929 kubelet[1209]: I0915 06:47:09.914349    1209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gvxzv\" (UniqueName: \"kubernetes.io/projected/0443fc45-c95c-4fab-9dfe-a1b598ac6c8b-kube-api-access-gvxzv\") on node \"addons-368929\" DevicePath \"\""
	
	
	==> storage-provisioner [522296a80728916149b3ee4256e90eef6adc4203cedbb8a359d09436a8a45fa2] <==
	I0915 06:31:26.557262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:31:26.648171       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:31:26.648246       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:31:26.724533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:31:26.725105       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99973a3d-83c9-43fb-b77d-d8ca8d8c9277", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962 became leader
	I0915 06:31:26.729461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962!
	I0915 06:31:26.839908       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-368929_2b5bab70-53fb-4236-bcbd-c12d04df3962!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-368929 -n addons-368929
helpers_test.go:261: (dbg) Run:  kubectl --context addons-368929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-368929 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-368929 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-368929/192.168.39.212
	Start Time:       Sun, 15 Sep 2024 06:32:56 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rz99b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rz99b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-368929
	  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (327.02s)
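The describe output above explains why busybox is the lone non-running pod flagged in this post-mortem: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unable to retrieve auth token: invalid username/password: unauthorized: authentication failed", so the pod never left ImagePullBackOff. A minimal way to confirm this outside the test harness, assuming the addons-368929 profile is still running (illustrative commands, not part of the test suite):

	# show only the events for the stuck pod, including the pull failures
	kubectl --context addons-368929 get events -n default --field-selector involvedObject.name=busybox
	# retry the pull directly on the node via CRI-O; a success here would point at a transient registry/auth problem
	minikube -p addons-368929 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc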

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-884523 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-v4xlm" [bf8b675c-3906-4885-ac1b-995e5506d861] Pending
helpers_test.go:344: "mysql-6cdb49bbb-v4xlm" [bf8b675c-3906-4885-ac1b-995e5506d861] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-884523 -n functional-884523
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-09-15 07:01:19.115542834 +0000 UTC m=+1898.063816413
functional_test.go:1799: (dbg) Run:  kubectl --context functional-884523 describe po mysql-6cdb49bbb-v4xlm -n default
functional_test.go:1799: (dbg) kubectl --context functional-884523 describe po mysql-6cdb49bbb-v4xlm -n default:
Name:             mysql-6cdb49bbb-v4xlm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-884523/192.168.39.88
Start Time:       Sun, 15 Sep 2024 06:51:18 +0000
Labels:           app=mysql
                  pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-6cdb49bbb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm74k (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tm74k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-v4xlm to functional-884523
functional_test.go:1799: (dbg) Run:  kubectl --context functional-884523 logs mysql-6cdb49bbb-v4xlm -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-884523 logs mysql-6cdb49bbb-v4xlm -n default: exit status 1 (68.077385ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-v4xlm" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-884523 logs mysql-6cdb49bbb-v4xlm -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
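The describe output above records only a single Scheduled event after the full 10m0s wait, so the mysql container never reached a pull error; it stayed in ContainerCreating (typically meaning the docker.io/mysql:5.7 image was still being downloaded) until the test's context deadline expired. A quick way to check whether the image pull was still in flight, assuming the functional-884523 profile is still up (illustrative commands, not part of the test suite):

	# events for the pending pod, including any image-pull progress messages
	kubectl --context functional-884523 get events -n default --field-selector involvedObject.name=mysql-6cdb49bbb-v4xlm
	# list images already present on the node via CRI-O
	minikube -p functional-884523 ssh -- sudo crictl images | grep mysql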
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-884523 -n functional-884523
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 logs -n 25: (1.485764321s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-884523 ssh findmnt                                           | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount2                                                              |                   |         |         |                     |                     |
	| ssh            | functional-884523 ssh findmnt                                           | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount3                                                              |                   |         |         |                     |                     |
	| mount          | -p functional-884523                                                    | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | --kill=true                                                             |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                      | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -p functional-884523                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| image          | functional-884523 image ls                                              | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-884523 image load --daemon                                   | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-884523                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523 image ls                                              | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-884523 image load --daemon                                   | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-884523                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523 image ls                                              | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-884523 image save kicbase/echo-server:functional-884523      | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523 image rm                                              | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-884523                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523 image ls                                              | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-884523 image load                                            | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523 image ls                                              | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-884523 image save --daemon                                   | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-884523                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-884523                                                       | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-884523                                                       | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-884523                                                       | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-884523                                                       | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523                                                       | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523                                                       | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-884523                                                       | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-884523 ssh pgrep                                             | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-884523 image build -t                                        | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | localhost/my-image:functional-884523                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-884523 image ls                                              | functional-884523 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:51:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:51:16.529007   23332 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:16.529102   23332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:16.529112   23332 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:16.529116   23332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:16.529369   23332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 06:51:16.529973   23332 out.go:352] Setting JSON to false
	I0915 06:51:16.531180   23332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2022,"bootTime":1726381054,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:51:16.531295   23332 start.go:139] virtualization: kvm guest
	I0915 06:51:16.533273   23332 out.go:177] * [functional-884523] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:51:16.534517   23332 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:51:16.534522   23332 notify.go:220] Checking for updates...
	I0915 06:51:16.536923   23332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:16.538209   23332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:51:16.539381   23332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:51:16.540689   23332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:16.541887   23332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:16.543582   23332 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:16.544133   23332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:51:16.544184   23332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:51:16.561259   23332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32993
	I0915 06:51:16.561694   23332 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:51:16.562246   23332 main.go:141] libmachine: Using API Version  1
	I0915 06:51:16.562270   23332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:51:16.562672   23332 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:51:16.562863   23332 main.go:141] libmachine: (functional-884523) Calling .DriverName
	I0915 06:51:16.563119   23332 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:16.563455   23332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:51:16.563495   23332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:51:16.580483   23332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0915 06:51:16.580941   23332 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:51:16.581486   23332 main.go:141] libmachine: Using API Version  1
	I0915 06:51:16.581513   23332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:51:16.581887   23332 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:51:16.582090   23332 main.go:141] libmachine: (functional-884523) Calling .DriverName
	I0915 06:51:16.616130   23332 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 06:51:16.617515   23332 start.go:297] selected driver: kvm2
	I0915 06:51:16.617529   23332 start.go:901] validating driver "kvm2" against &{Name:functional-884523 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-884523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:16.617637   23332 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:16.620062   23332 out.go:201] 
	W0915 06:51:16.621525   23332 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 06:51:16.622918   23332 out.go:201] 
	
	
	==> CRI-O <==
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.940384813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd68a660-a8fb-4c8f-b10f-e668067d2f9f name=/runtime.v1.RuntimeService/Version
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.941626762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c71ef588-55a0-493e-8430-45fbf1647a4b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.942420530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383679942397090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c71ef588-55a0-493e-8430-45fbf1647a4b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.942849329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2c5f526-17aa-4da6-b335-e143b98b78f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.942927000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2c5f526-17aa-4da6-b335-e143b98b78f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.943347195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c99d0c938353531b34c8793afd74ca949e34e792bede00089de5eead55c4b487,PodSandboxId:a55e6889be81349e9138d4fe52071bc2c5665f1ebff601be311aea61aa2cc87f,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1726383101934792788,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94e9315c-2067-45cd-93db-729812f6b525,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1377f3a3972826e10f3bcd5d471bf54b2a010e763a76b26b2cf6f8c2b24a277b,PodSandboxId:8e69d81fc1f73dfe56e267a11d4612b26ff098b65b2caaa6937dde581665bcf5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726383091696759359,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-crzz2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cb89fcb7-71e3-4bd1-ba10-96bd7d9aaa28,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.k
ubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af32576a7613c723818c36ad308b8f67b129226eb021a00b0890ee54d3e2b01,PodSandboxId:6d524885a027959a1776d127c42dc9ea5922bd8e9c188b64c1f6ef019dc901b1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726383084742711041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-zmpzl,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: 7905086a-ed69-4d7c-a0da-9ca6593f3cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef0045ca2b71af567f74b0b87c4eecc39166804cd9eb84251e0996ab9e9e154,PodSandboxId:0c8e156dbb91f33237f6a2873cfb444528440368ef703764593bd1b833875749,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726383071795703740,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 178fb322-272c-438b-9d7c-b77cdbc47499,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccce5352418e5bd513565708677d59fe610b19ed4d17fd1d79a0ec443f99f467,PodSandboxId:b475997cf0077025da25f87f184475c1980c7e7a9097276e95fefbc047836b4c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383067024774548,Labels:map[string]string{io.kubernetes.container.
name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-k9nzw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de601a7a-5f20-44f0-a7ec-96830b9b63eb,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce23503175f9cb8052a083aec8b573e86d08e0133a50be9a801b26fc5c4d608f,PodSandboxId:dbdce717a5cb3f73751e9268d5059095f623ee86e300e6326fb989201a566891,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383066922692319,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-9nmz9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c4089cb-f6a9-46b8-b03a-bc1055397383,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b46d1a60b7633c737551c4e1724658881268161d8f7db3f62925812c607478,PodSandboxId:f2a4c917179d303fe7c4d807023611847e1be33566608c66dfa52b6993ffed43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726383039612436501,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006ecab15de09d8b09b8ae17f2c58cb2b11bb636705a1f4a24243b19c342b46b,PodSandboxId:b61dc56cdf4d74fd5c1f64092a58cfb158174cf855049ce36afaf1a8aa304513,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383039618773572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.ku
bernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6966c819c643c4f1a03024c95a81e8aff0e37592d235f990953e713fb0343ed,PodSandboxId:cb4ca59ff091715fdb0472f0269f22be4b66d409d6019b2c362a18e0b2229bba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383039625014428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7
gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c28d11527dff4a80630af2c48e248fac40d69c57c556b05267a8baecc363a4,PodSandboxId:096ae7d22d1141336542dd428b9e021bd6ab2a73311541e14b5584fcd8ca43dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1
decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383036007406752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce01cc0569f38cdf883700f439ae923,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf52958708a33683f537d2b2833ddf1401c79de24d436b008a9fdd77f42761bc,PodSandboxId:80891e52e9b14a70b61d6feb9122b8ac2c32bc8bbb39b4bbc8912df38d9ede9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4
e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383035816008879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470d1bad3362975e731d2788592476297462e0d35eb8cddd2ba85e082f12c6b5,PodSandboxId:b28b8f73ff33cc2f30c180b53237a6b1dc368cc9f0fbcf56c047f93357f3a97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8
d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383035837253646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d88434b26d2509f74d5c57f87240493d2830dc2af7903ad7e1da75c9a7065b,PodSandboxId:f5e13ca8f1148f6944e110f7f46cf65b06728a9e04b7178395eb01b08185035b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUN
NING,CreatedAt:1726383035778044413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80aa7152ddca13b2bff7829388af86cd68697b1efcd775cabfe39b2f46836e9e,PodSandboxId:3c28ddaa3bbf99ddae9b53ef7471deb850b591ead68ac6ebde92f94b927f8b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITE
D,CreatedAt:1726382996924724783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f87266287fb1c827c0a38310ba828f714912ddbc8f9bbd97b9dafd19b0f7f2,PodSandboxId:1a9d861be55d8714f9e9c9c17ebb76e66f99778bd487ed8edd646fa5bd9f532a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726382996682851619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce3a4cccef7a4a42cd6208d8fbf18c497aac2db6af5d881a8308cc6a90f8ec7,PodSandboxId:e86ec3270fef65ae4dfae62256c600596f8d279442bb04447ead9f006dd847b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaa
a5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726382996608536440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5366e2b2a502e0c238779a3216f1848315c9ffeed427cdea7896e21f0bdee203,PodSandboxId:5310d906a8dc43f7e2037ecc5e17b6cafec4f6f8dcb10b16a6cd11553c91a687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726382992809239018,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8613a9559e555edbc0e118c2df22bf9feed9f74f93302892bab9dd0584346d0,PodSandboxId:98b42bfb3a9ec727e1ea510f2a0236f274a7a56f7730827a3423ef5209ac200e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13
b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726382992838398345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c8c9b81cd4dafa96cef69a02453b821080f95a9a157628364bcf3c490cf96e,PodSandboxId:08f6975a27e90176960a810c7ee7d807a2ab773b332177cfc9bee07b15ae189b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726382992779896352,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2c5f526-17aa-4da6-b335-e143b98b78f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.980758002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c90822da-c41f-4b25-a0d6-10321740ebaf name=/runtime.v1.RuntimeService/Version
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.980855520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c90822da-c41f-4b25-a0d6-10321740ebaf name=/runtime.v1.RuntimeService/Version
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.981935454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=444800c8-ea49-4c0c-ac6b-aa481e1da0ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.983209297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383679983181523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=444800c8-ea49-4c0c-ac6b-aa481e1da0ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.983743407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bec62bd-7776-4138-badb-818c8fbfdc57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.983847432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bec62bd-7776-4138-badb-818c8fbfdc57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:19 functional-884523 crio[4668]: time="2024-09-15 07:01:19.984277360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c99d0c938353531b34c8793afd74ca949e34e792bede00089de5eead55c4b487,PodSandboxId:a55e6889be81349e9138d4fe52071bc2c5665f1ebff601be311aea61aa2cc87f,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1726383101934792788,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94e9315c-2067-45cd-93db-729812f6b525,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1377f3a3972826e10f3bcd5d471bf54b2a010e763a76b26b2cf6f8c2b24a277b,PodSandboxId:8e69d81fc1f73dfe56e267a11d4612b26ff098b65b2caaa6937dde581665bcf5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726383091696759359,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-crzz2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cb89fcb7-71e3-4bd1-ba10-96bd7d9aaa28,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.k
ubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af32576a7613c723818c36ad308b8f67b129226eb021a00b0890ee54d3e2b01,PodSandboxId:6d524885a027959a1776d127c42dc9ea5922bd8e9c188b64c1f6ef019dc901b1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726383084742711041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-zmpzl,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: 7905086a-ed69-4d7c-a0da-9ca6593f3cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef0045ca2b71af567f74b0b87c4eecc39166804cd9eb84251e0996ab9e9e154,PodSandboxId:0c8e156dbb91f33237f6a2873cfb444528440368ef703764593bd1b833875749,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726383071795703740,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 178fb322-272c-438b-9d7c-b77cdbc47499,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccce5352418e5bd513565708677d59fe610b19ed4d17fd1d79a0ec443f99f467,PodSandboxId:b475997cf0077025da25f87f184475c1980c7e7a9097276e95fefbc047836b4c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383067024774548,Labels:map[string]string{io.kubernetes.container.
name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-k9nzw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de601a7a-5f20-44f0-a7ec-96830b9b63eb,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce23503175f9cb8052a083aec8b573e86d08e0133a50be9a801b26fc5c4d608f,PodSandboxId:dbdce717a5cb3f73751e9268d5059095f623ee86e300e6326fb989201a566891,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383066922692319,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-9nmz9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c4089cb-f6a9-46b8-b03a-bc1055397383,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b46d1a60b7633c737551c4e1724658881268161d8f7db3f62925812c607478,PodSandboxId:f2a4c917179d303fe7c4d807023611847e1be33566608c66dfa52b6993ffed43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726383039612436501,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006ecab15de09d8b09b8ae17f2c58cb2b11bb636705a1f4a24243b19c342b46b,PodSandboxId:b61dc56cdf4d74fd5c1f64092a58cfb158174cf855049ce36afaf1a8aa304513,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383039618773572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.ku
bernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6966c819c643c4f1a03024c95a81e8aff0e37592d235f990953e713fb0343ed,PodSandboxId:cb4ca59ff091715fdb0472f0269f22be4b66d409d6019b2c362a18e0b2229bba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383039625014428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7
gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c28d11527dff4a80630af2c48e248fac40d69c57c556b05267a8baecc363a4,PodSandboxId:096ae7d22d1141336542dd428b9e021bd6ab2a73311541e14b5584fcd8ca43dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1
decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383036007406752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce01cc0569f38cdf883700f439ae923,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf52958708a33683f537d2b2833ddf1401c79de24d436b008a9fdd77f42761bc,PodSandboxId:80891e52e9b14a70b61d6feb9122b8ac2c32bc8bbb39b4bbc8912df38d9ede9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4
e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383035816008879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470d1bad3362975e731d2788592476297462e0d35eb8cddd2ba85e082f12c6b5,PodSandboxId:b28b8f73ff33cc2f30c180b53237a6b1dc368cc9f0fbcf56c047f93357f3a97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8
d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383035837253646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d88434b26d2509f74d5c57f87240493d2830dc2af7903ad7e1da75c9a7065b,PodSandboxId:f5e13ca8f1148f6944e110f7f46cf65b06728a9e04b7178395eb01b08185035b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUN
NING,CreatedAt:1726383035778044413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80aa7152ddca13b2bff7829388af86cd68697b1efcd775cabfe39b2f46836e9e,PodSandboxId:3c28ddaa3bbf99ddae9b53ef7471deb850b591ead68ac6ebde92f94b927f8b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITE
D,CreatedAt:1726382996924724783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f87266287fb1c827c0a38310ba828f714912ddbc8f9bbd97b9dafd19b0f7f2,PodSandboxId:1a9d861be55d8714f9e9c9c17ebb76e66f99778bd487ed8edd646fa5bd9f532a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726382996682851619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce3a4cccef7a4a42cd6208d8fbf18c497aac2db6af5d881a8308cc6a90f8ec7,PodSandboxId:e86ec3270fef65ae4dfae62256c600596f8d279442bb04447ead9f006dd847b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaa
a5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726382996608536440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5366e2b2a502e0c238779a3216f1848315c9ffeed427cdea7896e21f0bdee203,PodSandboxId:5310d906a8dc43f7e2037ecc5e17b6cafec4f6f8dcb10b16a6cd11553c91a687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726382992809239018,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8613a9559e555edbc0e118c2df22bf9feed9f74f93302892bab9dd0584346d0,PodSandboxId:98b42bfb3a9ec727e1ea510f2a0236f274a7a56f7730827a3423ef5209ac200e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13
b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726382992838398345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c8c9b81cd4dafa96cef69a02453b821080f95a9a157628364bcf3c490cf96e,PodSandboxId:08f6975a27e90176960a810c7ee7d807a2ab773b332177cfc9bee07b15ae189b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726382992779896352,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bec62bd-7776-4138-badb-818c8fbfdc57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.003671383Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2302a7db-5257-4457-b1f7-be16538e4b49 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.004103321Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a55e6889be81349e9138d4fe52071bc2c5665f1ebff601be311aea61aa2cc87f,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:94e9315c-2067-45cd-93db-729812f6b525,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726383100756220561,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94e9315c-2067-45cd-93db-729812f6b525,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volu
mes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2024-09-15T06:51:40.449256602Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8e69d81fc1f73dfe56e267a11d4612b26ff098b65b2caaa6937dde581665bcf5,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-695b96c756-crzz2,Uid:cb89fcb7-71e3-4bd1-ba10-96bd7d9aaa28,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726383081102208439,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-crzz2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cb89fcb7-71e3-4bd1-ba10-96bd7d9aaa28,k8s-app: kubernetes-dashboard,pod-template-hash: 695b96c756,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:51:20.786599803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d524885a027959a1776d127c42dc9ea5922bd8e9c188b64c1f6ef019dc901b1,
Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-c5db448b4-zmpzl,Uid:7905086a-ed69-4d7c-a0da-9ca6593f3cdf,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726383081086602227,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-zmpzl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7905086a-ed69-4d7c-a0da-9ca6593f3cdf,k8s-app: dashboard-metrics-scraper,pod-template-hash: c5db448b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:51:20.766590874Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:0c8e156dbb91f33237f6a2873cfb444528440368ef703764593bd1b833875749,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:178fb322-272c-438b-9d7c-b77cdbc47499,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383067662815227,Labels:map[string]string{integration-test: busybox-mount,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 178fb322-272c-438b-9d7c-b77cdbc47499,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:51:07.356146013Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b475997cf0077025da25f87f184475c1980c7e7a9097276e95fefbc047836b4c,Metadata:&PodSandboxMetadata{Name:hello-node-connect-67bdd5bbb4-k9nzw,Uid:de601a7a-5f20-44f0-a7ec-96830b9b63eb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726383063499301879,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-k9nzw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de601a7a-5f20-44f0-a7ec-96830b9b63eb,pod-template-hash: 67bdd5bbb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:51:03.187181862Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbdce717a5cb3f7375
1e9268d5059095f623ee86e300e6326fb989201a566891,Metadata:&PodSandboxMetadata{Name:hello-node-6b9f76b5c7-9nmz9,Uid:7c4089cb-f6a9-46b8-b03a-bc1055397383,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726383063088444741,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-6b9f76b5c7-9nmz9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c4089cb-f6a9-46b8-b03a-bc1055397383,pod-template-hash: 6b9f76b5c7,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:51:02.776612678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:096ae7d22d1141336542dd428b9e021bd6ab2a73311541e14b5584fcd8ca43dd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-884523,Uid:7ce01cc0569f38cdf883700f439ae923,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726383035790647512,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-funct
ional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce01cc0569f38cdf883700f439ae923,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.88:8441,kubernetes.io/config.hash: 7ce01cc0569f38cdf883700f439ae923,kubernetes.io/config.seen: 2024-09-15T06:50:35.291137377Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb4ca59ff091715fdb0472f0269f22be4b66d409d6019b2c362a18e0b2229bba,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7gcsm,Uid:6419c567-8383-46de-87eb-8bb3b81d34b7,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1726383032754052269,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:49:56.098224
045Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b28b8f73ff33cc2f30c180b53237a6b1dc368cc9f0fbcf56c047f93357f3a97d,Metadata:&PodSandboxMetadata{Name:etcd-functional-884523,Uid:79fc9b25e1f0631fa2a8ce13b987000e,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1726383032698864064,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.88:2379,kubernetes.io/config.hash: 79fc9b25e1f0631fa2a8ce13b987000e,kubernetes.io/config.seen: 2024-09-15T06:49:52.091741090Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f2a4c917179d303fe7c4d807023611847e1be33566608c66dfa52b6993ffed43,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7a108e05-ec86-456a-82cc-97a79c63fa54,Namespace:kub
e-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1726383032688032878,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"h
ostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-15T06:49:56.098236331Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80891e52e9b14a70b61d6feb9122b8ac2c32bc8bbb39b4bbc8912df38d9ede9b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-884523,Uid:5c2c03043ad92c5ad4a3945a465c5f8c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1726383032674623025,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5c2c03043ad92c5ad4a3945a465c5f8c,kubernetes.io/config.seen: 2024-09-15T06:49:52.091747980Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f5e13ca8f1148f6944e110f7f46cf65b06728a9e04b7178395eb01b08185035b,Metadata:&PodSandboxMetada
ta{Name:kube-controller-manager-functional-884523,Uid:c0ae8f00e77e54b49cb732228b4fdb52,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1726383032627466296,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0ae8f00e77e54b49cb732228b4fdb52,kubernetes.io/config.seen: 2024-09-15T06:49:52.091746852Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b61dc56cdf4d74fd5c1f64092a58cfb158174cf855049ce36afaf1a8aa304513,Metadata:&PodSandboxMetadata{Name:kube-proxy-9t6sn,Uid:182f7631-15b5-4558-bb34-a354ece378c7,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1726383032574301222,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.na
me: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:49:56.098233665Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c28ddaa3bbf99ddae9b53ef7471deb850b591ead68ac6ebde92f94b927f8b9d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7gcsm,Uid:6419c567-8383-46de-87eb-8bb3b81d34b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726382996568420839,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:49:56.098224045Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a9d861be55d8714f9e9c9c17ebb76
e66f99778bd487ed8edd646fa5bd9f532a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7a108e05-ec86-456a-82cc-97a79c63fa54,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726382996421632719,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"
volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-15T06:49:56.098236331Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e86ec3270fef65ae4dfae62256c600596f8d279442bb04447ead9f006dd847b4,Metadata:&PodSandboxMetadata{Name:kube-proxy-9t6sn,Uid:182f7631-15b5-4558-bb34-a354ece378c7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726382996419589138,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T06:49:56.098233665Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Pod
Sandbox{Id:5310d906a8dc43f7e2037ecc5e17b6cafec4f6f8dcb10b16a6cd11553c91a687,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-884523,Uid:c0ae8f00e77e54b49cb732228b4fdb52,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726382992592714352,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0ae8f00e77e54b49cb732228b4fdb52,kubernetes.io/config.seen: 2024-09-15T06:49:52.091746852Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:98b42bfb3a9ec727e1ea510f2a0236f274a7a56f7730827a3423ef5209ac200e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-884523,Uid:5c2c03043ad92c5ad4a3945a465c5f8c,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726382992591329
193,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5c2c03043ad92c5ad4a3945a465c5f8c,kubernetes.io/config.seen: 2024-09-15T06:49:52.091747980Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08f6975a27e90176960a810c7ee7d807a2ab773b332177cfc9bee07b15ae189b,Metadata:&PodSandboxMetadata{Name:etcd-functional-884523,Uid:79fc9b25e1f0631fa2a8ce13b987000e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726382992587857901,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etc
d.advertise-client-urls: https://192.168.39.88:2379,kubernetes.io/config.hash: 79fc9b25e1f0631fa2a8ce13b987000e,kubernetes.io/config.seen: 2024-09-15T06:49:52.091741090Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2302a7db-5257-4457-b1f7-be16538e4b49 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.005152867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1650bf64-522e-4a2f-ad24-74372418c21a name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.005358414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1650bf64-522e-4a2f-ad24-74372418c21a name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.006166656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c99d0c938353531b34c8793afd74ca949e34e792bede00089de5eead55c4b487,PodSandboxId:a55e6889be81349e9138d4fe52071bc2c5665f1ebff601be311aea61aa2cc87f,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1726383101934792788,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94e9315c-2067-45cd-93db-729812f6b525,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1377f3a3972826e10f3bcd5d471bf54b2a010e763a76b26b2cf6f8c2b24a277b,PodSandboxId:8e69d81fc1f73dfe56e267a11d4612b26ff098b65b2caaa6937dde581665bcf5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726383091696759359,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-crzz2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cb89fcb7-71e3-4bd1-ba10-96bd7d9aaa28,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.k
ubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af32576a7613c723818c36ad308b8f67b129226eb021a00b0890ee54d3e2b01,PodSandboxId:6d524885a027959a1776d127c42dc9ea5922bd8e9c188b64c1f6ef019dc901b1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726383084742711041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-zmpzl,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: 7905086a-ed69-4d7c-a0da-9ca6593f3cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef0045ca2b71af567f74b0b87c4eecc39166804cd9eb84251e0996ab9e9e154,PodSandboxId:0c8e156dbb91f33237f6a2873cfb444528440368ef703764593bd1b833875749,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726383071795703740,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 178fb322-272c-438b-9d7c-b77cdbc47499,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccce5352418e5bd513565708677d59fe610b19ed4d17fd1d79a0ec443f99f467,PodSandboxId:b475997cf0077025da25f87f184475c1980c7e7a9097276e95fefbc047836b4c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383067024774548,Labels:map[string]string{io.kubernetes.container.
name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-k9nzw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de601a7a-5f20-44f0-a7ec-96830b9b63eb,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce23503175f9cb8052a083aec8b573e86d08e0133a50be9a801b26fc5c4d608f,PodSandboxId:dbdce717a5cb3f73751e9268d5059095f623ee86e300e6326fb989201a566891,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383066922692319,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-9nmz9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c4089cb-f6a9-46b8-b03a-bc1055397383,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b46d1a60b7633c737551c4e1724658881268161d8f7db3f62925812c607478,PodSandboxId:f2a4c917179d303fe7c4d807023611847e1be33566608c66dfa52b6993ffed43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726383039612436501,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006ecab15de09d8b09b8ae17f2c58cb2b11bb636705a1f4a24243b19c342b46b,PodSandboxId:b61dc56cdf4d74fd5c1f64092a58cfb158174cf855049ce36afaf1a8aa304513,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383039618773572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.ku
bernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6966c819c643c4f1a03024c95a81e8aff0e37592d235f990953e713fb0343ed,PodSandboxId:cb4ca59ff091715fdb0472f0269f22be4b66d409d6019b2c362a18e0b2229bba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383039625014428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7
gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c28d11527dff4a80630af2c48e248fac40d69c57c556b05267a8baecc363a4,PodSandboxId:096ae7d22d1141336542dd428b9e021bd6ab2a73311541e14b5584fcd8ca43dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1
decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383036007406752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce01cc0569f38cdf883700f439ae923,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf52958708a33683f537d2b2833ddf1401c79de24d436b008a9fdd77f42761bc,PodSandboxId:80891e52e9b14a70b61d6feb9122b8ac2c32bc8bbb39b4bbc8912df38d9ede9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4
e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383035816008879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470d1bad3362975e731d2788592476297462e0d35eb8cddd2ba85e082f12c6b5,PodSandboxId:b28b8f73ff33cc2f30c180b53237a6b1dc368cc9f0fbcf56c047f93357f3a97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8
d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383035837253646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d88434b26d2509f74d5c57f87240493d2830dc2af7903ad7e1da75c9a7065b,PodSandboxId:f5e13ca8f1148f6944e110f7f46cf65b06728a9e04b7178395eb01b08185035b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUN
NING,CreatedAt:1726383035778044413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80aa7152ddca13b2bff7829388af86cd68697b1efcd775cabfe39b2f46836e9e,PodSandboxId:3c28ddaa3bbf99ddae9b53ef7471deb850b591ead68ac6ebde92f94b927f8b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITE
D,CreatedAt:1726382996924724783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f87266287fb1c827c0a38310ba828f714912ddbc8f9bbd97b9dafd19b0f7f2,PodSandboxId:1a9d861be55d8714f9e9c9c17ebb76e66f99778bd487ed8edd646fa5bd9f532a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726382996682851619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce3a4cccef7a4a42cd6208d8fbf18c497aac2db6af5d881a8308cc6a90f8ec7,PodSandboxId:e86ec3270fef65ae4dfae62256c600596f8d279442bb04447ead9f006dd847b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaa
a5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726382996608536440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5366e2b2a502e0c238779a3216f1848315c9ffeed427cdea7896e21f0bdee203,PodSandboxId:5310d906a8dc43f7e2037ecc5e17b6cafec4f6f8dcb10b16a6cd11553c91a687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726382992809239018,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8613a9559e555edbc0e118c2df22bf9feed9f74f93302892bab9dd0584346d0,PodSandboxId:98b42bfb3a9ec727e1ea510f2a0236f274a7a56f7730827a3423ef5209ac200e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13
b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726382992838398345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c8c9b81cd4dafa96cef69a02453b821080f95a9a157628364bcf3c490cf96e,PodSandboxId:08f6975a27e90176960a810c7ee7d807a2ab773b332177cfc9bee07b15ae189b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726382992779896352,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1650bf64-522e-4a2f-ad24-74372418c21a name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.032451041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45ee4c6c-4ed0-45ea-9e1d-ba1d517a82f8 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.032552115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45ee4c6c-4ed0-45ea-9e1d-ba1d517a82f8 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.033667723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fb8fc3c-9032-4984-a1c0-61f31c6391b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.034504619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383680034479412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fb8fc3c-9032-4984-a1c0-61f31c6391b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.035269192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be208e56-0b11-4a25-9826-143303d583f3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.035330773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be208e56-0b11-4a25-9826-143303d583f3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:01:20 functional-884523 crio[4668]: time="2024-09-15 07:01:20.035942792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c99d0c938353531b34c8793afd74ca949e34e792bede00089de5eead55c4b487,PodSandboxId:a55e6889be81349e9138d4fe52071bc2c5665f1ebff601be311aea61aa2cc87f,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1726383101934792788,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 94e9315c-2067-45cd-93db-729812f6b525,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1377f3a3972826e10f3bcd5d471bf54b2a010e763a76b26b2cf6f8c2b24a277b,PodSandboxId:8e69d81fc1f73dfe56e267a11d4612b26ff098b65b2caaa6937dde581665bcf5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726383091696759359,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-crzz2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: cb89fcb7-71e3-4bd1-ba10-96bd7d9aaa28,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.k
ubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af32576a7613c723818c36ad308b8f67b129226eb021a00b0890ee54d3e2b01,PodSandboxId:6d524885a027959a1776d127c42dc9ea5922bd8e9c188b64c1f6ef019dc901b1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726383084742711041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-zmpzl,io.kubernetes.pod.namespace:
kubernetes-dashboard,io.kubernetes.pod.uid: 7905086a-ed69-4d7c-a0da-9ca6593f3cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef0045ca2b71af567f74b0b87c4eecc39166804cd9eb84251e0996ab9e9e154,PodSandboxId:0c8e156dbb91f33237f6a2873cfb444528440368ef703764593bd1b833875749,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726383071795703740,Labels:map[string]string{io.kubernetes.contai
ner.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 178fb322-272c-438b-9d7c-b77cdbc47499,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccce5352418e5bd513565708677d59fe610b19ed4d17fd1d79a0ec443f99f467,PodSandboxId:b475997cf0077025da25f87f184475c1980c7e7a9097276e95fefbc047836b4c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383067024774548,Labels:map[string]string{io.kubernetes.container.
name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-k9nzw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de601a7a-5f20-44f0-a7ec-96830b9b63eb,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce23503175f9cb8052a083aec8b573e86d08e0133a50be9a801b26fc5c4d608f,PodSandboxId:dbdce717a5cb3f73751e9268d5059095f623ee86e300e6326fb989201a566891,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726383066922692319,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-9nmz9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c4089cb-f6a9-46b8-b03a-bc1055397383,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b46d1a60b7633c737551c4e1724658881268161d8f7db3f62925812c607478,PodSandboxId:f2a4c917179d303fe7c4d807023611847e1be33566608c66dfa52b6993ffed43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726383039612436501,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006ecab15de09d8b09b8ae17f2c58cb2b11bb636705a1f4a24243b19c342b46b,PodSandboxId:b61dc56cdf4d74fd5c1f64092a58cfb158174cf855049ce36afaf1a8aa304513,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383039618773572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.ku
bernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6966c819c643c4f1a03024c95a81e8aff0e37592d235f990953e713fb0343ed,PodSandboxId:cb4ca59ff091715fdb0472f0269f22be4b66d409d6019b2c362a18e0b2229bba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383039625014428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7
gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c28d11527dff4a80630af2c48e248fac40d69c57c556b05267a8baecc363a4,PodSandboxId:096ae7d22d1141336542dd428b9e021bd6ab2a73311541e14b5584fcd8ca43dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1
decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383036007406752,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ce01cc0569f38cdf883700f439ae923,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf52958708a33683f537d2b2833ddf1401c79de24d436b008a9fdd77f42761bc,PodSandboxId:80891e52e9b14a70b61d6feb9122b8ac2c32bc8bbb39b4bbc8912df38d9ede9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4
e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383035816008879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470d1bad3362975e731d2788592476297462e0d35eb8cddd2ba85e082f12c6b5,PodSandboxId:b28b8f73ff33cc2f30c180b53237a6b1dc368cc9f0fbcf56c047f93357f3a97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8
d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383035837253646,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d88434b26d2509f74d5c57f87240493d2830dc2af7903ad7e1da75c9a7065b,PodSandboxId:f5e13ca8f1148f6944e110f7f46cf65b06728a9e04b7178395eb01b08185035b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUN
NING,CreatedAt:1726383035778044413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80aa7152ddca13b2bff7829388af86cd68697b1efcd775cabfe39b2f46836e9e,PodSandboxId:3c28ddaa3bbf99ddae9b53ef7471deb850b591ead68ac6ebde92f94b927f8b9d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITE
D,CreatedAt:1726382996924724783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7gcsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6419c567-8383-46de-87eb-8bb3b81d34b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f87266287fb1c827c0a38310ba828f714912ddbc8f9bbd97b9dafd19b0f7f2,PodSandboxId:1a9d861be55d8714f9e9c9c17ebb76e66f99778bd487ed8edd646fa5bd9f532a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726382996682851619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a108e05-ec86-456a-82cc-97a79c63fa54,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce3a4cccef7a4a42cd6208d8fbf18c497aac2db6af5d881a8308cc6a90f8ec7,PodSandboxId:e86ec3270fef65ae4dfae62256c600596f8d279442bb04447ead9f006dd847b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaa
a5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726382996608536440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9t6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182f7631-15b5-4558-bb34-a354ece378c7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5366e2b2a502e0c238779a3216f1848315c9ffeed427cdea7896e21f0bdee203,PodSandboxId:5310d906a8dc43f7e2037ecc5e17b6cafec4f6f8dcb10b16a6cd11553c91a687,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726382992809239018,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ae8f00e77e54b49cb732228b4fdb52,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8613a9559e555edbc0e118c2df22bf9feed9f74f93302892bab9dd0584346d0,PodSandboxId:98b42bfb3a9ec727e1ea510f2a0236f274a7a56f7730827a3423ef5209ac200e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13
b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726382992838398345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c2c03043ad92c5ad4a3945a465c5f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c8c9b81cd4dafa96cef69a02453b821080f95a9a157628364bcf3c490cf96e,PodSandboxId:08f6975a27e90176960a810c7ee7d807a2ab773b332177cfc9bee07b15ae189b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726382992779896352,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-884523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79fc9b25e1f0631fa2a8ce13b987000e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be208e56-0b11-4a25-9826-143303d583f3 name=/runtime.v1.RuntimeService/ListContainers
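	(Editor's note: the repeated ListContainers/Version/ImageFsInfo entries above are ordinary CRI gRPC round-trips between the kubelet (or crictl) and CRI-O on the socket named in the node's cri-socket annotation, unix:///var/run/crio/crio.sock. As a minimal sketch only — not part of the test suite — the same call could be issued with the k8s.io/cri-api and google.golang.org/grpc modules; the socket path and RPC name come from the log, everything else is illustrative, and it would have to run inside the functional-884523 VM where the socket lives.)

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket referenced by the node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same request shape as the log entries above: an empty filter returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Id, container name and state correspond to the columns in the "container status" table below.
			fmt.Printf("%s %s %v\n", c.Id, c.Metadata.Name, c.State)
		}
	}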
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c99d0c9383535       docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                  9 minutes ago       Running             myfrontend                  0                   a55e6889be813       sp-pod
	1377f3a397282       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   8e69d81fc1f73       kubernetes-dashboard-695b96c756-crzz2
	2af32576a7613       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   6d524885a0279       dashboard-metrics-scraper-c5db448b4-zmpzl
	3ef0045ca2b71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   0c8e156dbb91f       busybox-mount
	ccce5352418e5       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   b475997cf0077       hello-node-connect-67bdd5bbb4-k9nzw
	ce23503175f9c       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   dbdce717a5cb3       hello-node-6b9f76b5c7-9nmz9
	b6966c819c643       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     2                   cb4ca59ff0917       coredns-7c65d6cfc9-7gcsm
	006ecab15de09       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 10 minutes ago      Running             kube-proxy                  2                   b61dc56cdf4d7       kube-proxy-9t6sn
	54b46d1a60b76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   f2a4c917179d3       storage-provisioner
	d6c28d11527df       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 10 minutes ago      Running             kube-apiserver              0                   096ae7d22d114       kube-apiserver-functional-884523
	470d1bad33629       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 10 minutes ago      Running             etcd                        2                   b28b8f73ff33c       etcd-functional-884523
	cf52958708a33       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 10 minutes ago      Running             kube-scheduler              2                   80891e52e9b14       kube-scheduler-functional-884523
	b4d88434b26d2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 10 minutes ago      Running             kube-controller-manager     2                   f5e13ca8f1148       kube-controller-manager-functional-884523
	80aa7152ddca1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     1                   3c28ddaa3bbf9       coredns-7c65d6cfc9-7gcsm
	84f87266287fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   1a9d861be55d8       storage-provisioner
	cce3a4cccef7a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 11 minutes ago      Exited              kube-proxy                  1                   e86ec3270fef6       kube-proxy-9t6sn
	b8613a9559e55       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 11 minutes ago      Exited              kube-scheduler              1                   98b42bfb3a9ec       kube-scheduler-functional-884523
	5366e2b2a502e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 11 minutes ago      Exited              kube-controller-manager     1                   5310d906a8dc4       kube-controller-manager-functional-884523
	58c8c9b81cd4d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 11 minutes ago      Exited              etcd                        1                   08f6975a27e90       etcd-functional-884523
	
	
	==> coredns [80aa7152ddca13b2bff7829388af86cd68697b1efcd775cabfe39b2f46836e9e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36618 - 47356 "HINFO IN 5886555240182034265.2174842513775997864. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009930332s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b6966c819c643c4f1a03024c95a81e8aff0e37592d235f990953e713fb0343ed] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37465 - 51032 "HINFO IN 9018768785766265875.2691034095053066816. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015074963s
	
	
	==> describe nodes <==
	Name:               functional-884523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-884523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=functional-884523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_48_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:48:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-884523
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:01:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:56:46 +0000   Sun, 15 Sep 2024 06:48:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:56:46 +0000   Sun, 15 Sep 2024 06:48:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:56:46 +0000   Sun, 15 Sep 2024 06:48:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:56:46 +0000   Sun, 15 Sep 2024 06:48:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    functional-884523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 af6c5e287ecd42878ba5f2290478c343
	  System UUID:                af6c5e28-7ecd-4287-8ba5-f2290478c343
	  Boot ID:                    3d8e19e4-2101-4209-a781-54495099ed92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-9nmz9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-67bdd5bbb4-k9nzw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6cdb49bbb-v4xlm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 coredns-7c65d6cfc9-7gcsm                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-884523                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-884523             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-884523    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9t6sn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-884523             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-zmpzl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-crzz2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-884523 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-884523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-884523 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-884523 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-884523 event: Registered Node functional-884523 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-884523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-884523 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-884523 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-884523 event: Registered Node functional-884523 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-884523 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-884523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-884523 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-884523 event: Registered Node functional-884523 in Controller
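	(Editor's note: the Conditions block above shows the node reporting Ready=True for the whole window. As a hedged illustration only, the same Type/Status/Reason rows could be pulled programmatically with client-go, assuming the kubeconfig context that minikube writes for this profile; the context and node name are taken from the output above, the rest is illustrative.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig and select the profile's context.
		config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "functional-884523"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}

		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "functional-884523", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// Print the same Type/Status/Reason columns as the Conditions block above.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}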
	
	
	==> dmesg <==
	[  +0.126003] systemd-fstab-generator[2366]: Ignoring "noauto" option for root device
	[  +0.270083] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[  +8.570695] systemd-fstab-generator[2521]: Ignoring "noauto" option for root device
	[  +0.078893] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.023694] systemd-fstab-generator[2641]: Ignoring "noauto" option for root device
	[  +4.557020] kauditd_printk_skb: 74 callbacks suppressed
	[Sep15 06:50] systemd-fstab-generator[3394]: Ignoring "noauto" option for root device
	[  +0.089165] kauditd_printk_skb: 37 callbacks suppressed
	[ +17.672517] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.626337] systemd-fstab-generator[4584]: Ignoring "noauto" option for root device
	[  +0.143427] systemd-fstab-generator[4607]: Ignoring "noauto" option for root device
	[  +0.160170] systemd-fstab-generator[4621]: Ignoring "noauto" option for root device
	[  +0.164986] systemd-fstab-generator[4633]: Ignoring "noauto" option for root device
	[  +0.265378] systemd-fstab-generator[4661]: Ignoring "noauto" option for root device
	[  +1.227836] systemd-fstab-generator[5100]: Ignoring "noauto" option for root device
	[  +2.040495] systemd-fstab-generator[5293]: Ignoring "noauto" option for root device
	[  +0.372940] kauditd_printk_skb: 251 callbacks suppressed
	[  +7.042858] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.317401] systemd-fstab-generator[5877]: Ignoring "noauto" option for root device
	[  +6.120604] kauditd_printk_skb: 12 callbacks suppressed
	[Sep15 06:51] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.877071] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.099981] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.281858] kauditd_printk_skb: 44 callbacks suppressed
	[ +14.116857] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [470d1bad3362975e731d2788592476297462e0d35eb8cddd2ba85e082f12c6b5] <==
	{"level":"info","ts":"2024-09-15T06:50:37.431030Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:50:37.431954Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:50:37.432767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:50:37.433928Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:50:37.435698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-09-15T06:50:37.436769Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:50:37.436816Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:51:17.908472Z","caller":"traceutil/trace.go:171","msg":"trace[1611109060] linearizableReadLoop","detail":"{readStateIndex:816; appliedIndex:815; }","duration":"303.239186ms","start":"2024-09-15T06:51:17.605215Z","end":"2024-09-15T06:51:17.908454Z","steps":["trace[1611109060] 'read index received'  (duration: 303.028209ms)","trace[1611109060] 'applied index is now lower than readState.Index'  (duration: 210.514µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:51:17.908688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.406325ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:51:17.908730Z","caller":"traceutil/trace.go:171","msg":"trace[1053341216] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:747; }","duration":"303.510393ms","start":"2024-09-15T06:51:17.605208Z","end":"2024-09-15T06:51:17.908718Z","steps":["trace[1053341216] 'agreement among raft nodes before linearized reading'  (duration: 303.368895ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:51:17.909166Z","caller":"traceutil/trace.go:171","msg":"trace[332618849] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"466.196778ms","start":"2024-09-15T06:51:17.442956Z","end":"2024-09-15T06:51:17.909153Z","steps":["trace[332618849] 'process raft request'  (duration: 465.334449ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:51:17.909628Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:51:17.442937Z","time spent":"466.280909ms","remote":"127.0.0.1:57062","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:746 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-15T06:51:28.292247Z","caller":"traceutil/trace.go:171","msg":"trace[1817952811] linearizableReadLoop","detail":"{readStateIndex:911; appliedIndex:910; }","duration":"272.623433ms","start":"2024-09-15T06:51:28.019610Z","end":"2024-09-15T06:51:28.292233Z","steps":["trace[1817952811] 'read index received'  (duration: 272.386662ms)","trace[1817952811] 'applied index is now lower than readState.Index'  (duration: 234.167µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:51:28.292353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.72543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:51:28.292378Z","caller":"traceutil/trace.go:171","msg":"trace[604048691] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:840; }","duration":"272.76893ms","start":"2024-09-15T06:51:28.019604Z","end":"2024-09-15T06:51:28.292372Z","steps":["trace[604048691] 'agreement among raft nodes before linearized reading'  (duration: 272.707051ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:51:28.292473Z","caller":"traceutil/trace.go:171","msg":"trace[1389833191] transaction","detail":"{read_only:false; response_revision:840; number_of_response:1; }","duration":"312.845628ms","start":"2024-09-15T06:51:27.979614Z","end":"2024-09-15T06:51:28.292460Z","steps":["trace[1389833191] 'process raft request'  (duration: 312.511603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:51:28.292548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:51:27.979590Z","time spent":"312.913946ms","remote":"127.0.0.1:57062","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:836 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-15T06:51:30.182210Z","caller":"traceutil/trace.go:171","msg":"trace[453874088] linearizableReadLoop","detail":"{readStateIndex:913; appliedIndex:912; }","duration":"163.950177ms","start":"2024-09-15T06:51:30.018246Z","end":"2024-09-15T06:51:30.182196Z","steps":["trace[453874088] 'read index received'  (duration: 162.79781ms)","trace[453874088] 'applied index is now lower than readState.Index'  (duration: 1.151752ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:51:30.182386Z","caller":"traceutil/trace.go:171","msg":"trace[956137589] transaction","detail":"{read_only:false; response_revision:842; number_of_response:1; }","duration":"204.624033ms","start":"2024-09-15T06:51:29.977755Z","end":"2024-09-15T06:51:30.182379Z","steps":["trace[956137589] 'process raft request'  (duration: 203.387591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:51:30.182511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.251733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:51:30.182529Z","caller":"traceutil/trace.go:171","msg":"trace[168630513] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:842; }","duration":"164.28183ms","start":"2024-09-15T06:51:30.018241Z","end":"2024-09-15T06:51:30.182523Z","steps":["trace[168630513] 'agreement among raft nodes before linearized reading'  (duration: 164.238214ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:51:36.515755Z","caller":"traceutil/trace.go:171","msg":"trace[1537415989] transaction","detail":"{read_only:false; response_revision:861; number_of_response:1; }","duration":"180.013743ms","start":"2024-09-15T06:51:36.335727Z","end":"2024-09-15T06:51:36.515741Z","steps":["trace[1537415989] 'process raft request'  (duration: 179.909024ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:00:37.469547Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1068}
	{"level":"info","ts":"2024-09-15T07:00:37.494783Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1068,"took":"24.777689ms","hash":6231821,"current-db-size-bytes":3735552,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1437696,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2024-09-15T07:00:37.494947Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":6231821,"revision":1068,"compact-revision":-1}
	
	
	==> etcd [58c8c9b81cd4dafa96cef69a02453b821080f95a9a157628364bcf3c490cf96e] <==
	{"level":"info","ts":"2024-09-15T06:49:54.681744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T06:49:54.681763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgPreVoteResp from aa0bd43d5988e1af at term 2"}
	{"level":"info","ts":"2024-09-15T06:49:54.681774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:54.681780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgVoteResp from aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:54.681787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became leader at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:54.681794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aa0bd43d5988e1af elected leader aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:54.687453Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aa0bd43d5988e1af","local-member-attributes":"{Name:functional-884523 ClientURLs:[https://192.168.39.88:2379]}","request-path":"/0/members/aa0bd43d5988e1af/attributes","cluster-id":"9f9d2ecdb39156b6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:49:54.687469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:49:54.687693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:49:54.687908Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:49:54.687947Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:49:54.688560Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:54.688574Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:54.689438Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-09-15T06:49:54.689916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:50:24.551281Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-15T06:50:24.551417Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-884523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	{"level":"warn","ts":"2024-09-15T06:50:24.551526Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:50:24.551648Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:50:24.632866Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:50:24.632963Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T06:50:24.633125Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aa0bd43d5988e1af","current-leader-member-id":"aa0bd43d5988e1af"}
	{"level":"info","ts":"2024-09-15T06:50:24.636408Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-09-15T06:50:24.636537Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-09-15T06:50:24.636576Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-884523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	
	
	==> kernel <==
	 07:01:20 up 13 min,  0 users,  load average: 0.17, 0.25, 0.20
	Linux functional-884523 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d6c28d11527dff4a80630af2c48e248fac40d69c57c556b05267a8baecc363a4] <==
	I0915 06:50:38.880576       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 06:50:38.880673       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 06:50:38.880881       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 06:50:38.883387       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0915 06:50:38.888765       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0915 06:50:38.912057       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 06:50:38.929980       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 06:50:39.689858       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 06:50:40.524501       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 06:50:40.552323       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 06:50:40.604390       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 06:50:40.639242       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 06:50:40.649629       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 06:50:42.137057       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 06:50:42.435147       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 06:50:58.923339       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.87.18"}
	I0915 06:51:02.740848       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0915 06:51:02.852646       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.217.107"}
	I0915 06:51:03.280011       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.115.131"}
	I0915 06:51:18.772566       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.196.150"}
	I0915 06:51:20.603611       1 controller.go:615] quota admission added evaluator for: namespaces
	I0915 06:51:20.874923       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.84.250"}
	I0915 06:51:20.908996       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.164.138"}
	E0915 06:51:39.138856       1 conn.go:339] Error on socket receive: read tcp 192.168.39.88:8441->192.168.39.1:57986: use of closed network connection
	E0915 06:51:47.570424       1 conn.go:339] Error on socket receive: read tcp 192.168.39.88:8441->192.168.39.1:34938: use of closed network connection
	
	
	==> kube-controller-manager [5366e2b2a502e0c238779a3216f1848315c9ffeed427cdea7896e21f0bdee203] <==
	I0915 06:49:59.321449       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0915 06:49:59.321476       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0915 06:49:59.321497       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0915 06:49:59.321581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-884523"
	I0915 06:49:59.324571       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0915 06:49:59.349642       1 shared_informer.go:320] Caches are synced for taint
	I0915 06:49:59.349702       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0915 06:49:59.349743       1 shared_informer.go:320] Caches are synced for endpoint
	I0915 06:49:59.349774       1 shared_informer.go:320] Caches are synced for HPA
	I0915 06:49:59.349760       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-884523"
	I0915 06:49:59.349949       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0915 06:49:59.361297       1 shared_informer.go:320] Caches are synced for job
	I0915 06:49:59.384631       1 shared_informer.go:320] Caches are synced for persistent volume
	I0915 06:49:59.395753       1 shared_informer.go:320] Caches are synced for PVC protection
	I0915 06:49:59.398223       1 shared_informer.go:320] Caches are synced for expand
	I0915 06:49:59.401678       1 shared_informer.go:320] Caches are synced for attach detach
	I0915 06:49:59.444759       1 shared_informer.go:320] Caches are synced for stateful set
	I0915 06:49:59.450383       1 shared_informer.go:320] Caches are synced for ephemeral
	I0915 06:49:59.499788       1 shared_informer.go:320] Caches are synced for disruption
	I0915 06:49:59.504951       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:49:59.505689       1 shared_informer.go:320] Caches are synced for deployment
	I0915 06:49:59.534409       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:49:59.931048       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:49:59.951062       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:49:59.951144       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [b4d88434b26d2509f74d5c57f87240493d2830dc2af7903ad7e1da75c9a7065b] <==
	I0915 06:51:07.870628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6b9f76b5c7" duration="42.825µs"
	I0915 06:51:18.873600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="40.242326ms"
	I0915 06:51:18.889059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="15.399945ms"
	I0915 06:51:18.889217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="43.243µs"
	I0915 06:51:18.906705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="43.41µs"
	I0915 06:51:20.715867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="58.980332ms"
	E0915 06:51:20.716006       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 06:51:20.734645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="16.244227ms"
	E0915 06:51:20.734701       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 06:51:20.745336       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="28.001742ms"
	E0915 06:51:20.745482       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 06:51:20.772637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="35.468477ms"
	I0915 06:51:20.793172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.591456ms"
	I0915 06:51:20.809718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.706914ms"
	I0915 06:51:20.809894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="105.825µs"
	I0915 06:51:20.830197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.694962ms"
	I0915 06:51:20.843392       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="110.263µs"
	I0915 06:51:20.858747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="28.011985ms"
	I0915 06:51:20.858987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="93.144µs"
	I0915 06:51:25.227427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.097383ms"
	I0915 06:51:25.227510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="33.934µs"
	I0915 06:51:32.276143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.145358ms"
	I0915 06:51:32.276436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="94.818µs"
	I0915 06:51:40.419758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-884523"
	I0915 06:56:46.912019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-884523"
	
	
	==> kube-proxy [006ecab15de09d8b09b8ae17f2c58cb2b11bb636705a1f4a24243b19c342b46b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 06:50:40.033021       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 06:50:40.043731       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	E0915 06:50:40.044281       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:50:40.077299       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 06:50:40.077341       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 06:50:40.077367       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:50:40.079726       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:50:40.079933       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:50:40.079945       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:50:40.081660       1 config.go:199] "Starting service config controller"
	I0915 06:50:40.082754       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:50:40.082815       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:50:40.082833       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:50:40.083611       1 config.go:328] "Starting node config controller"
	I0915 06:50:40.083652       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:50:40.183350       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:50:40.183457       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:50:40.183999       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [cce3a4cccef7a4a42cd6208d8fbf18c497aac2db6af5d881a8308cc6a90f8ec7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 06:49:56.915731       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 06:49:56.938649       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	E0915 06:49:56.938702       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:49:57.021183       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 06:49:57.021234       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 06:49:57.021262       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:49:57.026549       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:49:57.026823       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:49:57.026852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:49:57.030473       1 config.go:199] "Starting service config controller"
	I0915 06:49:57.030505       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:49:57.030523       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:49:57.030527       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:49:57.030920       1 config.go:328] "Starting node config controller"
	I0915 06:49:57.030953       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:49:57.131334       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:49:57.131380       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:49:57.131495       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b8613a9559e555edbc0e118c2df22bf9feed9f74f93302892bab9dd0584346d0] <==
	I0915 06:49:53.903110       1 serving.go:386] Generated self-signed cert in-memory
	W0915 06:49:55.985529       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 06:49:55.985573       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 06:49:55.985583       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 06:49:55.985589       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 06:49:56.014619       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 06:49:56.014665       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:49:56.016548       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 06:49:56.016661       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 06:49:56.016698       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:49:56.016712       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 06:49:56.117674       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:50:24.546373       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0915 06:50:24.546474       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0915 06:50:24.546691       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cf52958708a33683f537d2b2833ddf1401c79de24d436b008a9fdd77f42761bc] <==
	W0915 06:50:38.769512       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 06:50:38.769656       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 06:50:38.769687       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 06:50:38.769764       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 06:50:38.807666       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 06:50:38.807806       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:50:38.814773       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 06:50:38.816197       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 06:50:38.816244       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:50:38.816308       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0915 06:50:38.824391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:50:38.824446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:50:38.824524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:50:38.824557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:50:38.824642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:50:38.824671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:50:38.824845       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:50:38.825161       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:50:38.825319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:50:38.827150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:50:38.827316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:50:38.827354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:50:38.827321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:50:38.827376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0915 06:50:39.716653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:59:45 functional-884523 kubelet[5300]: E0915 06:59:45.584599    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383585584294386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:59:45 functional-884523 kubelet[5300]: E0915 06:59:45.585043    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383585584294386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:59:55 functional-884523 kubelet[5300]: E0915 06:59:55.587532    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383595587005186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:59:55 functional-884523 kubelet[5300]: E0915 06:59:55.587870    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383595587005186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:05 functional-884523 kubelet[5300]: E0915 07:00:05.590340    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383605589987975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:05 functional-884523 kubelet[5300]: E0915 07:00:05.590766    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383605589987975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:15 functional-884523 kubelet[5300]: E0915 07:00:15.593194    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383615592767794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:15 functional-884523 kubelet[5300]: E0915 07:00:15.593272    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383615592767794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:25 functional-884523 kubelet[5300]: E0915 07:00:25.595775    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383625595197617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:25 functional-884523 kubelet[5300]: E0915 07:00:25.596181    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383625595197617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:35 functional-884523 kubelet[5300]: E0915 07:00:35.398407    5300 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:00:35 functional-884523 kubelet[5300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:00:35 functional-884523 kubelet[5300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:00:35 functional-884523 kubelet[5300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:00:35 functional-884523 kubelet[5300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:00:35 functional-884523 kubelet[5300]: E0915 07:00:35.597952    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383635597635467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:35 functional-884523 kubelet[5300]: E0915 07:00:35.598001    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383635597635467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:45 functional-884523 kubelet[5300]: E0915 07:00:45.600146    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383645599859406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:45 functional-884523 kubelet[5300]: E0915 07:00:45.600184    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383645599859406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:55 functional-884523 kubelet[5300]: E0915 07:00:55.602240    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383655601434774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:55 functional-884523 kubelet[5300]: E0915 07:00:55.602571    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383655601434774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:05 functional-884523 kubelet[5300]: E0915 07:01:05.604902    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383665604543150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:05 functional-884523 kubelet[5300]: E0915 07:01:05.605224    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383665604543150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:15 functional-884523 kubelet[5300]: E0915 07:01:15.607125    5300 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383675606711693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:15 functional-884523 kubelet[5300]: E0915 07:01:15.607162    5300 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383675606711693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [1377f3a3972826e10f3bcd5d471bf54b2a010e763a76b26b2cf6f8c2b24a277b] <==
	2024/09/15 06:51:31 Starting overwatch
	2024/09/15 06:51:31 Using namespace: kubernetes-dashboard
	2024/09/15 06:51:31 Using in-cluster config to connect to apiserver
	2024/09/15 06:51:31 Using secret token for csrf signing
	2024/09/15 06:51:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/15 06:51:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/15 06:51:31 Successful initial request to the apiserver, version: v1.31.1
	2024/09/15 06:51:31 Generating JWE encryption key
	2024/09/15 06:51:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/15 06:51:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/15 06:51:31 Initializing JWE encryption key from synchronized object
	2024/09/15 06:51:31 Creating in-cluster Sidecar client
	2024/09/15 06:51:31 Successful request to sidecar
	2024/09/15 06:51:31 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [54b46d1a60b7633c737551c4e1724658881268161d8f7db3f62925812c607478] <==
	I0915 06:50:39.883988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:50:39.905785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:50:39.905837       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:50:57.309272       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:50:57.309405       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-884523_a4d8dc5e-68f8-4a0a-a485-1a46feee6319!
	I0915 06:50:57.310608       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"672112a4-f286-417d-aa06-e67a06ac84cf", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-884523_a4d8dc5e-68f8-4a0a-a485-1a46feee6319 became leader
	I0915 06:50:57.409968       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-884523_a4d8dc5e-68f8-4a0a-a485-1a46feee6319!
	I0915 06:51:08.857611       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0915 06:51:08.857680       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    bd9419e0-addc-41d8-bd9a-f67f32976d5d 337 0 2024-09-15 06:48:54 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-15 06:48:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-d756a572-eea3-463c-805b-068810a41bb1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  d756a572-eea3-463c-805b-068810a41bb1 721 0 2024-09-15 06:51:08 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-15 06:51:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-15 06:51:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0915 06:51:08.858244       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-d756a572-eea3-463c-805b-068810a41bb1" provisioned
	I0915 06:51:08.858259       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0915 06:51:08.858267       1 volume_store.go:212] Trying to save persistentvolume "pvc-d756a572-eea3-463c-805b-068810a41bb1"
	I0915 06:51:08.860568       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d756a572-eea3-463c-805b-068810a41bb1", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0915 06:51:08.882613       1 volume_store.go:219] persistentvolume "pvc-d756a572-eea3-463c-805b-068810a41bb1" saved
	I0915 06:51:08.882723       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d756a572-eea3-463c-805b-068810a41bb1", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d756a572-eea3-463c-805b-068810a41bb1
	
	
	==> storage-provisioner [84f87266287fb1c827c0a38310ba828f714912ddbc8f9bbd97b9dafd19b0f7f2] <==
	I0915 06:49:56.821314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:49:56.840314       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:49:56.840422       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:50:14.250834       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:50:14.250997       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-884523_110a4756-5beb-4703-b446-c1e8ec044ae9!
	I0915 06:50:14.251409       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"672112a4-f286-417d-aa06-e67a06ac84cf", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-884523_110a4756-5beb-4703-b446-c1e8ec044ae9 became leader
	I0915 06:50:14.351155       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-884523_110a4756-5beb-4703-b446-c1e8ec044ae9!
	

                                                
                                                
-- /stdout --
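The kubelet entries in the captured logs above repeat the same pair of errors every ten seconds: the iptables canary failure (ip6tables has no nat table on this kernel) and the eviction manager's "failed to get HasDedicatedImageFs: missing image stats", whose error text embeds the ImageFsInfoResponse it received from CRI-O (a single image filesystem at /var/lib/containers/storage/overlay-images), presumably without everything the eviction manager expects. To inspect what the runtime actually returns on such a node, the minimal Go sketch below queries the CRI ImageFsInfo endpoint directly. It is illustrative only, not part of the minikube test harness, and it assumes the CRI-O socket at /var/run/crio/crio.sock and the v1 CRI API.

// Minimal sketch (not part of the test suite): query the CRI ImageFsInfo
// endpoint that the kubelet eviction manager relies on.
// Assumption: CRI-O listening on /var/run/crio/crio.sock with the v1 CRI API.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// gRPC understands the unix:// target scheme for local sockets.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}
	// Print the same fields that appear inside the kubelet error message.
	for _, fs := range resp.GetImageFilesystems() {
		fmt.Printf("mountpoint=%s usedBytes=%d inodesUsed=%d\n",
			fs.GetFsId().GetMountpoint(),
			fs.GetUsedBytes().GetValue(),
			fs.GetInodesUsed().GetValue())
	}
}

Running crictl imagefsinfo on the node should report the same data, so the sketch is mainly useful when a Go client is already in play.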
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-884523 -n functional-884523
helpers_test.go:261: (dbg) Run:  kubectl --context functional-884523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-v4xlm
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-884523 describe pod busybox-mount mysql-6cdb49bbb-v4xlm
helpers_test.go:282: (dbg) kubectl --context functional-884523 describe pod busybox-mount mysql-6cdb49bbb-v4xlm:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-884523/192.168.39.88
	Start Time:       Sun, 15 Sep 2024 06:51:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3ef0045ca2b71af567f74b0b87c4eecc39166804cd9eb84251e0996ab9e9e154
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 15 Sep 2024 06:51:11 +0000
	      Finished:     Sun, 15 Sep 2024 06:51:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6d9xf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6d9xf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-884523
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.826s (3.826s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-v4xlm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-884523/192.168.39.88
	Start Time:       Sun, 15 Sep 2024 06:51:18 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm74k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tm74k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-v4xlm to functional-884523

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.73s)
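The describe output shows why the test gave up after 602.73s: mysql-6cdb49bbb-v4xlm is still Pending, its only container sits in Waiting/ContainerCreating, and the sole recorded event is its scheduling roughly ten minutes earlier, so whatever the kubelet was blocked on (most plausibly pulling docker.io/mysql:5.7) never completed inside the test window. The wait the harness performs is essentially a poll for a Ready pod matching the app=mysql label; the client-go sketch below shows that kind of wait. It is illustrative rather than the helper the test actually uses, and it assumes a kubeconfig at the default location instead of the functional-884523 test context.

// Minimal sketch (assumptions: default kubeconfig location, label app=mysql
// in the default namespace) of a poll for a Ready MySQL pod.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build client: %v", err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "app=mysql",
			})
			if err != nil {
				return false, err // stop polling on API errors
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Printf("pod %s is Ready\n", p.Name)
						return true, nil
					}
				}
			}
			return false, nil // keep polling, e.g. while the pod is ContainerCreating
		})
	if err != nil {
		log.Fatalf("mysql pod never became Ready: %v", err)
	}
}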

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 node stop m02 -v=7 --alsologtostderr
E0915 07:06:03.008702   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:03.329997   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:03.972088   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:05.253622   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:07.815439   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:12.937471   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:23.179203   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:43.661203   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:07:24.623056   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:07:56.196405   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.456563679s)

                                                
                                                
-- stdout --
	* Stopping node "ha-670527-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:06:02.941317   30840 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:06:02.941451   30840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:02.941459   30840 out.go:358] Setting ErrFile to fd 2...
	I0915 07:06:02.941464   30840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:02.941633   30840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:06:02.941898   30840 mustload.go:65] Loading cluster: ha-670527
	I0915 07:06:02.942278   30840 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:06:02.942293   30840 stop.go:39] StopHost: ha-670527-m02
	I0915 07:06:02.942662   30840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:06:02.942694   30840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:06:02.957397   30840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0915 07:06:02.957865   30840 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:06:02.958468   30840 main.go:141] libmachine: Using API Version  1
	I0915 07:06:02.958496   30840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:06:02.958796   30840 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:06:02.961306   30840 out.go:177] * Stopping node "ha-670527-m02"  ...
	I0915 07:06:02.962688   30840 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0915 07:06:02.962708   30840 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:06:02.962893   30840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0915 07:06:02.962922   30840 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:06:02.965428   30840 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:06:02.965857   30840 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:06:02.965883   30840 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:06:02.966003   30840 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:06:02.966157   30840 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:06:02.966281   30840 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:06:02.966425   30840 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:06:03.053797   30840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0915 07:06:03.109282   30840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0915 07:06:03.165911   30840 main.go:141] libmachine: Stopping "ha-670527-m02"...
	I0915 07:06:03.165944   30840 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:06:03.167550   30840 main.go:141] libmachine: (ha-670527-m02) Calling .Stop
	I0915 07:06:03.170708   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 0/120
	I0915 07:06:04.172198   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 1/120
	I0915 07:06:05.174222   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 2/120
	I0915 07:06:06.176097   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 3/120
	I0915 07:06:07.177393   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 4/120
	I0915 07:06:08.179318   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 5/120
	I0915 07:06:09.180743   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 6/120
	I0915 07:06:10.182035   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 7/120
	I0915 07:06:11.184234   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 8/120
	I0915 07:06:12.185345   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 9/120
	I0915 07:06:13.187553   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 10/120
	I0915 07:06:14.188962   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 11/120
	I0915 07:06:15.190535   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 12/120
	I0915 07:06:16.192267   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 13/120
	I0915 07:06:17.193462   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 14/120
	I0915 07:06:18.195555   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 15/120
	I0915 07:06:19.197481   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 16/120
	I0915 07:06:20.199064   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 17/120
	I0915 07:06:21.200289   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 18/120
	I0915 07:06:22.201657   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 19/120
	I0915 07:06:23.203629   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 20/120
	I0915 07:06:24.205427   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 21/120
	I0915 07:06:25.206834   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 22/120
	I0915 07:06:26.208283   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 23/120
	I0915 07:06:27.209691   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 24/120
	I0915 07:06:28.211500   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 25/120
	I0915 07:06:29.213690   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 26/120
	I0915 07:06:30.214913   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 27/120
	I0915 07:06:31.216214   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 28/120
	I0915 07:06:32.217392   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 29/120
	I0915 07:06:33.219528   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 30/120
	I0915 07:06:34.220677   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 31/120
	I0915 07:06:35.222776   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 32/120
	I0915 07:06:36.223957   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 33/120
	I0915 07:06:37.225213   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 34/120
	I0915 07:06:38.226955   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 35/120
	I0915 07:06:39.228454   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 36/120
	I0915 07:06:40.230202   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 37/120
	I0915 07:06:41.232086   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 38/120
	I0915 07:06:42.233515   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 39/120
	I0915 07:06:43.235484   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 40/120
	I0915 07:06:44.236652   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 41/120
	I0915 07:06:45.238485   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 42/120
	I0915 07:06:46.239618   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 43/120
	I0915 07:06:47.241120   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 44/120
	I0915 07:06:48.242914   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 45/120
	I0915 07:06:49.244183   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 46/120
	I0915 07:06:50.245612   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 47/120
	I0915 07:06:51.246763   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 48/120
	I0915 07:06:52.248331   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 49/120
	I0915 07:06:53.250076   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 50/120
	I0915 07:06:54.251587   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 51/120
	I0915 07:06:55.252941   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 52/120
	I0915 07:06:56.254215   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 53/120
	I0915 07:06:57.255288   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 54/120
	I0915 07:06:58.256822   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 55/120
	I0915 07:06:59.258275   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 56/120
	I0915 07:07:00.260510   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 57/120
	I0915 07:07:01.261949   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 58/120
	I0915 07:07:02.263275   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 59/120
	I0915 07:07:03.265019   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 60/120
	I0915 07:07:04.266932   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 61/120
	I0915 07:07:05.268294   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 62/120
	I0915 07:07:06.269683   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 63/120
	I0915 07:07:07.271038   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 64/120
	I0915 07:07:08.272871   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 65/120
	I0915 07:07:09.274128   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 66/120
	I0915 07:07:10.275576   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 67/120
	I0915 07:07:11.276824   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 68/120
	I0915 07:07:12.278204   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 69/120
	I0915 07:07:13.280199   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 70/120
	I0915 07:07:14.281476   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 71/120
	I0915 07:07:15.282741   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 72/120
	I0915 07:07:16.284162   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 73/120
	I0915 07:07:17.285440   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 74/120
	I0915 07:07:18.287075   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 75/120
	I0915 07:07:19.288412   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 76/120
	I0915 07:07:20.289727   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 77/120
	I0915 07:07:21.291081   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 78/120
	I0915 07:07:22.292484   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 79/120
	I0915 07:07:23.294504   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 80/120
	I0915 07:07:24.296534   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 81/120
	I0915 07:07:25.298136   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 82/120
	I0915 07:07:26.300366   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 83/120
	I0915 07:07:27.301776   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 84/120
	I0915 07:07:28.303764   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 85/120
	I0915 07:07:29.304938   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 86/120
	I0915 07:07:30.306453   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 87/120
	I0915 07:07:31.308580   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 88/120
	I0915 07:07:32.310090   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 89/120
	I0915 07:07:33.312508   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 90/120
	I0915 07:07:34.313799   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 91/120
	I0915 07:07:35.315453   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 92/120
	I0915 07:07:36.316916   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 93/120
	I0915 07:07:37.318531   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 94/120
	I0915 07:07:38.320290   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 95/120
	I0915 07:07:39.321465   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 96/120
	I0915 07:07:40.322986   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 97/120
	I0915 07:07:41.324872   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 98/120
	I0915 07:07:42.326509   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 99/120
	I0915 07:07:43.328314   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 100/120
	I0915 07:07:44.329557   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 101/120
	I0915 07:07:45.330798   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 102/120
	I0915 07:07:46.332090   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 103/120
	I0915 07:07:47.333329   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 104/120
	I0915 07:07:48.335405   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 105/120
	I0915 07:07:49.336762   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 106/120
	I0915 07:07:50.338068   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 107/120
	I0915 07:07:51.339346   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 108/120
	I0915 07:07:52.340713   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 109/120
	I0915 07:07:53.342666   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 110/120
	I0915 07:07:54.344079   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 111/120
	I0915 07:07:55.345322   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 112/120
	I0915 07:07:56.346712   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 113/120
	I0915 07:07:57.348020   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 114/120
	I0915 07:07:58.350085   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 115/120
	I0915 07:07:59.351385   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 116/120
	I0915 07:08:00.352716   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 117/120
	I0915 07:08:01.354213   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 118/120
	I0915 07:08:02.355816   30840 main.go:141] libmachine: (ha-670527-m02) Waiting for machine to stop 119/120
	I0915 07:08:03.356522   30840 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0915 07:08:03.356647   30840 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-670527 node stop m02 -v=7 --alsologtostderr": exit status 30
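The stderr above shows the sequence minikube went through before giving up: back up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, ask the kvm2 driver to stop the VM, then poll its state once per second for 120 attempts. The 2m0.45s on the Non-zero exit line is essentially those 120 one-second polls. A minimal sketch of that poll-with-deadline pattern is below; requestStop and currentState are hypothetical stand-ins for the driver calls in the log (.Stop / .GetState), not minikube's actual libmachine API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for the driver calls seen in the log; the simulated
// state never leaves "Running", so the loop exhausts its attempts, as it did here.
func requestStop(vm string) error   { fmt.Println("* Stopping node", vm, "..."); return nil }
func currentState(vm string) string { return "Running" }

// stopAndWait mirrors the "Waiting for machine to stop i/N" loop above:
// one poll per second, then a hard error if the VM still reports Running.
func stopAndWait(vm string, attempts int) error {
	if err := requestStop(vm); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if currentState(vm) != "Running" {
			return nil
		}
		fmt.Printf("waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The real command uses 120 attempts; a handful keeps the sketch quick to run.
	if err := stopAndWait("ha-670527-m02", 5); err != nil {
		fmt.Println("stop err:", err)
	}
}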
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (19.229519987s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:08:03.397289   31254 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:08:03.397535   31254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:03.397545   31254 out.go:358] Setting ErrFile to fd 2...
	I0915 07:08:03.397551   31254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:03.397755   31254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:08:03.397962   31254 out.go:352] Setting JSON to false
	I0915 07:08:03.397998   31254 mustload.go:65] Loading cluster: ha-670527
	I0915 07:08:03.398088   31254 notify.go:220] Checking for updates...
	I0915 07:08:03.398442   31254 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:08:03.398459   31254 status.go:255] checking status of ha-670527 ...
	I0915 07:08:03.398871   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:03.398941   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:03.418448   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0915 07:08:03.418846   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:03.419368   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:03.419388   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:03.419810   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:03.420006   31254 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:08:03.421768   31254 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:08:03.421783   31254 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:03.422090   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:03.422134   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:03.436577   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33085
	I0915 07:08:03.437029   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:03.437528   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:03.437550   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:03.437929   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:03.438099   31254 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:08:03.441232   31254 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:03.441686   31254 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:03.441706   31254 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:03.441862   31254 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:03.442349   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:03.442408   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:03.457734   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I0915 07:08:03.458138   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:03.458582   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:03.458600   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:03.458912   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:03.459081   31254 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:08:03.459305   31254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:03.459339   31254 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:08:03.462001   31254 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:03.462376   31254 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:03.462404   31254 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:03.462527   31254 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:08:03.462659   31254 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:08:03.462787   31254 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:08:03.462894   31254 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:08:03.552245   31254 ssh_runner.go:195] Run: systemctl --version
	I0915 07:08:03.560825   31254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:03.576998   31254 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:03.577037   31254 api_server.go:166] Checking apiserver status ...
	I0915 07:08:03.577068   31254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:03.599614   31254 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:08:03.611408   31254 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:03.611482   31254 ssh_runner.go:195] Run: ls
	I0915 07:08:03.616029   31254 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:03.620405   31254 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:03.620436   31254 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:08:03.620448   31254 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:03.620470   31254 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:08:03.620760   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:03.620790   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:03.635462   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0915 07:08:03.635872   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:03.636382   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:03.636406   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:03.636689   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:03.636877   31254 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:08:03.638469   31254 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:08:03.638486   31254 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:03.638762   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:03.638794   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:03.653927   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I0915 07:08:03.654392   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:03.654835   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:03.654852   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:03.655154   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:03.655350   31254 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:08:03.658229   31254 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:03.658649   31254 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:03.658674   31254 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:03.658825   31254 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:03.659243   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:03.659286   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:03.673569   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0915 07:08:03.674013   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:03.674474   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:03.674489   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:03.674793   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:03.674956   31254 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:08:03.675132   31254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:03.675150   31254 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:08:03.677530   31254 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:03.677936   31254 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:03.677967   31254 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:03.678097   31254 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:08:03.678252   31254 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:08:03.678422   31254 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:08:03.678570   31254 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	W0915 07:08:22.230006   31254 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:22.230105   31254 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0915 07:08:22.230149   31254 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:22.230161   31254 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:08:22.230184   31254 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:22.230192   31254 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:08:22.230503   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:22.230548   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:22.246052   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0915 07:08:22.246449   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:22.246925   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:22.246946   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:22.247281   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:22.247437   31254 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:08:22.248968   31254 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:08:22.248983   31254 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:22.249269   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:22.249317   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:22.265657   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I0915 07:08:22.266068   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:22.266471   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:22.266491   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:22.266780   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:22.266952   31254 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:08:22.269444   31254 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:22.269820   31254 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:22.269855   31254 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:22.269956   31254 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:22.270291   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:22.270330   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:22.284768   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34909
	I0915 07:08:22.285238   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:22.285708   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:22.285727   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:22.286018   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:22.286199   31254 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:08:22.286340   31254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:22.286364   31254 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:08:22.289020   31254 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:22.289505   31254 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:22.289535   31254 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:22.289677   31254 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:08:22.289827   31254 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:08:22.289934   31254 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:08:22.290068   31254 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:08:22.374518   31254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:22.392842   31254 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:22.392873   31254 api_server.go:166] Checking apiserver status ...
	I0915 07:08:22.392912   31254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:22.407524   31254 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:08:22.419535   31254 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:22.419607   31254 ssh_runner.go:195] Run: ls
	I0915 07:08:22.423999   31254 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:22.428944   31254 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:22.428963   31254 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:08:22.428971   31254 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:22.428985   31254 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:08:22.429289   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:22.429324   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:22.446388   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0915 07:08:22.446712   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:22.447203   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:22.447225   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:22.447546   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:22.447720   31254 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:08:22.449074   31254 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:08:22.449087   31254 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:22.449356   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:22.449389   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:22.463934   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I0915 07:08:22.464335   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:22.464748   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:22.464771   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:22.465072   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:22.465232   31254 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:08:22.468058   31254 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:22.468447   31254 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:22.468476   31254 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:22.468611   31254 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:22.468976   31254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:22.469020   31254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:22.483508   31254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I0915 07:08:22.483994   31254 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:22.484506   31254 main.go:141] libmachine: Using API Version  1
	I0915 07:08:22.484528   31254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:22.484826   31254 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:22.484999   31254 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:08:22.485161   31254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:22.485192   31254 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:08:22.487720   31254 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:22.488124   31254 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:22.488149   31254 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:22.488282   31254 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:08:22.488424   31254 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:08:22.488551   31254 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:08:22.488673   31254 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:08:22.570449   31254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:22.585939   31254 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr" : exit status 3
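The status stderr above also documents what the command checks on each node: an SSH session running df -h /var and systemctl is-active kubelet, a pgrep for kube-apiserver, and finally an HTTPS GET against https://192.168.39.254:8443/healthz that expects a 200 with body "ok". For ha-670527-m02 the SSH dial fails with "no route to host", so that node is reported Error/Nonexistent and the command exits 3. The healthz probe portion can be sketched as follows; skipping TLS verification is a simplification made only to keep the snippet self-contained, not how minikube itself talks to the apiserver.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the control-plane VIP seen in the log. InsecureSkipVerify is an
	// assumption for illustration; a real check would trust the cluster CA.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// The healthy case in the log is "returned 200" with body "ok".
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}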
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-670527 -n ha-670527
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-670527 logs -n 25: (1.370541048s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527:/home/docker/cp-test_ha-670527-m03_ha-670527.txt                       |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527 sudo cat                                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527.txt                                 |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m02:/home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m04:/home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m04 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp testdata/cp-test.txt                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527:/home/docker/cp-test_ha-670527-m04_ha-670527.txt                       |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527 sudo cat                                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527.txt                                 |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m02:/home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m03:/home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m03 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-670527 node stop m02 -v=7                                                     | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:01:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:01:22.338266   26835 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:01:22.338515   26835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:22.338525   26835 out.go:358] Setting ErrFile to fd 2...
	I0915 07:01:22.338532   26835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:22.338738   26835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:01:22.339316   26835 out.go:352] Setting JSON to false
	I0915 07:01:22.340214   26835 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2628,"bootTime":1726381054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:01:22.340315   26835 start.go:139] virtualization: kvm guest
	I0915 07:01:22.342433   26835 out.go:177] * [ha-670527] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:01:22.343626   26835 notify.go:220] Checking for updates...
	I0915 07:01:22.343686   26835 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:01:22.344812   26835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:01:22.346115   26835 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:01:22.347411   26835 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:01:22.348750   26835 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:01:22.349955   26835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:01:22.351099   26835 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:01:22.384814   26835 out.go:177] * Using the kvm2 driver based on user configuration
	I0915 07:01:22.386050   26835 start.go:297] selected driver: kvm2
	I0915 07:01:22.386063   26835 start.go:901] validating driver "kvm2" against <nil>
	I0915 07:01:22.386074   26835 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:01:22.386776   26835 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:01:22.386846   26835 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:01:22.401115   26835 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:01:22.401164   26835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 07:01:22.401477   26835 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:01:22.401519   26835 cni.go:84] Creating CNI manager for ""
	I0915 07:01:22.401575   26835 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0915 07:01:22.401585   26835 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 07:01:22.401663   26835 start.go:340] cluster config:
	{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:01:22.401928   26835 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:01:22.404211   26835 out.go:177] * Starting "ha-670527" primary control-plane node in "ha-670527" cluster
	I0915 07:01:22.405703   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:01:22.405735   26835 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:01:22.405743   26835 cache.go:56] Caching tarball of preloaded images
	I0915 07:01:22.405833   26835 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:01:22.405846   26835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:01:22.406152   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:01:22.406173   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json: {Name:mkf802eeadbffbfc049e41868d31a8e27df1da7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:22.406318   26835 start.go:360] acquireMachinesLock for ha-670527: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:01:22.406357   26835 start.go:364] duration metric: took 18.446µs to acquireMachinesLock for "ha-670527"
	I0915 07:01:22.406374   26835 start.go:93] Provisioning new machine with config: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:01:22.406438   26835 start.go:125] createHost starting for "" (driver="kvm2")
	I0915 07:01:22.408103   26835 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 07:01:22.408239   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:01:22.408284   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:01:22.422470   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0915 07:01:22.422913   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:01:22.423449   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:01:22.423469   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:01:22.423853   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:01:22.424040   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:22.424172   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:22.424333   26835 start.go:159] libmachine.API.Create for "ha-670527" (driver="kvm2")
	I0915 07:01:22.424365   26835 client.go:168] LocalClient.Create starting
	I0915 07:01:22.424409   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 07:01:22.424444   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:01:22.424460   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:01:22.424514   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 07:01:22.424531   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:01:22.424544   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:01:22.424556   26835 main.go:141] libmachine: Running pre-create checks...
	I0915 07:01:22.424568   26835 main.go:141] libmachine: (ha-670527) Calling .PreCreateCheck
	I0915 07:01:22.424948   26835 main.go:141] libmachine: (ha-670527) Calling .GetConfigRaw
	I0915 07:01:22.425340   26835 main.go:141] libmachine: Creating machine...
	I0915 07:01:22.425353   26835 main.go:141] libmachine: (ha-670527) Calling .Create
	I0915 07:01:22.425518   26835 main.go:141] libmachine: (ha-670527) Creating KVM machine...
	I0915 07:01:22.426896   26835 main.go:141] libmachine: (ha-670527) DBG | found existing default KVM network
	I0915 07:01:22.427571   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.427424   26858 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0915 07:01:22.427618   26835 main.go:141] libmachine: (ha-670527) DBG | created network xml: 
	I0915 07:01:22.427640   26835 main.go:141] libmachine: (ha-670527) DBG | <network>
	I0915 07:01:22.427654   26835 main.go:141] libmachine: (ha-670527) DBG |   <name>mk-ha-670527</name>
	I0915 07:01:22.427664   26835 main.go:141] libmachine: (ha-670527) DBG |   <dns enable='no'/>
	I0915 07:01:22.427687   26835 main.go:141] libmachine: (ha-670527) DBG |   
	I0915 07:01:22.427700   26835 main.go:141] libmachine: (ha-670527) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0915 07:01:22.427707   26835 main.go:141] libmachine: (ha-670527) DBG |     <dhcp>
	I0915 07:01:22.427718   26835 main.go:141] libmachine: (ha-670527) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0915 07:01:22.427726   26835 main.go:141] libmachine: (ha-670527) DBG |     </dhcp>
	I0915 07:01:22.427739   26835 main.go:141] libmachine: (ha-670527) DBG |   </ip>
	I0915 07:01:22.427747   26835 main.go:141] libmachine: (ha-670527) DBG |   
	I0915 07:01:22.427752   26835 main.go:141] libmachine: (ha-670527) DBG | </network>
	I0915 07:01:22.427764   26835 main.go:141] libmachine: (ha-670527) DBG | 
	I0915 07:01:22.432551   26835 main.go:141] libmachine: (ha-670527) DBG | trying to create private KVM network mk-ha-670527 192.168.39.0/24...
	I0915 07:01:22.495364   26835 main.go:141] libmachine: (ha-670527) DBG | private KVM network mk-ha-670527 192.168.39.0/24 created
	I0915 07:01:22.495398   26835 main.go:141] libmachine: (ha-670527) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527 ...
	I0915 07:01:22.495423   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.495344   26858 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:01:22.495440   26835 main.go:141] libmachine: (ha-670527) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 07:01:22.495519   26835 main.go:141] libmachine: (ha-670527) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 07:01:22.742568   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.742430   26858 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa...
	I0915 07:01:22.978699   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.978563   26858 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/ha-670527.rawdisk...
	I0915 07:01:22.978729   26835 main.go:141] libmachine: (ha-670527) DBG | Writing magic tar header
	I0915 07:01:22.978738   26835 main.go:141] libmachine: (ha-670527) DBG | Writing SSH key tar header
	I0915 07:01:22.978745   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.978695   26858 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527 ...
	I0915 07:01:22.978895   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527 (perms=drwx------)
	I0915 07:01:22.978922   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527
	I0915 07:01:22.978933   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 07:01:22.978949   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 07:01:22.978975   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 07:01:22.978987   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 07:01:22.978994   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 07:01:22.979002   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 07:01:22.979011   26835 main.go:141] libmachine: (ha-670527) Creating domain...
	I0915 07:01:22.979025   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:01:22.979041   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 07:01:22.979057   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 07:01:22.979068   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins
	I0915 07:01:22.979079   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home
	I0915 07:01:22.979090   26835 main.go:141] libmachine: (ha-670527) DBG | Skipping /home - not owner
	I0915 07:01:22.980081   26835 main.go:141] libmachine: (ha-670527) define libvirt domain using xml: 
	I0915 07:01:22.980126   26835 main.go:141] libmachine: (ha-670527) <domain type='kvm'>
	I0915 07:01:22.980136   26835 main.go:141] libmachine: (ha-670527)   <name>ha-670527</name>
	I0915 07:01:22.980143   26835 main.go:141] libmachine: (ha-670527)   <memory unit='MiB'>2200</memory>
	I0915 07:01:22.980148   26835 main.go:141] libmachine: (ha-670527)   <vcpu>2</vcpu>
	I0915 07:01:22.980154   26835 main.go:141] libmachine: (ha-670527)   <features>
	I0915 07:01:22.980159   26835 main.go:141] libmachine: (ha-670527)     <acpi/>
	I0915 07:01:22.980166   26835 main.go:141] libmachine: (ha-670527)     <apic/>
	I0915 07:01:22.980171   26835 main.go:141] libmachine: (ha-670527)     <pae/>
	I0915 07:01:22.980180   26835 main.go:141] libmachine: (ha-670527)     
	I0915 07:01:22.980186   26835 main.go:141] libmachine: (ha-670527)   </features>
	I0915 07:01:22.980191   26835 main.go:141] libmachine: (ha-670527)   <cpu mode='host-passthrough'>
	I0915 07:01:22.980198   26835 main.go:141] libmachine: (ha-670527)   
	I0915 07:01:22.980202   26835 main.go:141] libmachine: (ha-670527)   </cpu>
	I0915 07:01:22.980206   26835 main.go:141] libmachine: (ha-670527)   <os>
	I0915 07:01:22.980210   26835 main.go:141] libmachine: (ha-670527)     <type>hvm</type>
	I0915 07:01:22.980214   26835 main.go:141] libmachine: (ha-670527)     <boot dev='cdrom'/>
	I0915 07:01:22.980220   26835 main.go:141] libmachine: (ha-670527)     <boot dev='hd'/>
	I0915 07:01:22.980224   26835 main.go:141] libmachine: (ha-670527)     <bootmenu enable='no'/>
	I0915 07:01:22.980230   26835 main.go:141] libmachine: (ha-670527)   </os>
	I0915 07:01:22.980234   26835 main.go:141] libmachine: (ha-670527)   <devices>
	I0915 07:01:22.980240   26835 main.go:141] libmachine: (ha-670527)     <disk type='file' device='cdrom'>
	I0915 07:01:22.980266   26835 main.go:141] libmachine: (ha-670527)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/boot2docker.iso'/>
	I0915 07:01:22.980293   26835 main.go:141] libmachine: (ha-670527)       <target dev='hdc' bus='scsi'/>
	I0915 07:01:22.980315   26835 main.go:141] libmachine: (ha-670527)       <readonly/>
	I0915 07:01:22.980335   26835 main.go:141] libmachine: (ha-670527)     </disk>
	I0915 07:01:22.980359   26835 main.go:141] libmachine: (ha-670527)     <disk type='file' device='disk'>
	I0915 07:01:22.980379   26835 main.go:141] libmachine: (ha-670527)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 07:01:22.980394   26835 main.go:141] libmachine: (ha-670527)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/ha-670527.rawdisk'/>
	I0915 07:01:22.980418   26835 main.go:141] libmachine: (ha-670527)       <target dev='hda' bus='virtio'/>
	I0915 07:01:22.980430   26835 main.go:141] libmachine: (ha-670527)     </disk>
	I0915 07:01:22.980439   26835 main.go:141] libmachine: (ha-670527)     <interface type='network'>
	I0915 07:01:22.980451   26835 main.go:141] libmachine: (ha-670527)       <source network='mk-ha-670527'/>
	I0915 07:01:22.980460   26835 main.go:141] libmachine: (ha-670527)       <model type='virtio'/>
	I0915 07:01:22.980468   26835 main.go:141] libmachine: (ha-670527)     </interface>
	I0915 07:01:22.980477   26835 main.go:141] libmachine: (ha-670527)     <interface type='network'>
	I0915 07:01:22.980484   26835 main.go:141] libmachine: (ha-670527)       <source network='default'/>
	I0915 07:01:22.980516   26835 main.go:141] libmachine: (ha-670527)       <model type='virtio'/>
	I0915 07:01:22.980533   26835 main.go:141] libmachine: (ha-670527)     </interface>
	I0915 07:01:22.980545   26835 main.go:141] libmachine: (ha-670527)     <serial type='pty'>
	I0915 07:01:22.980554   26835 main.go:141] libmachine: (ha-670527)       <target port='0'/>
	I0915 07:01:22.980562   26835 main.go:141] libmachine: (ha-670527)     </serial>
	I0915 07:01:22.980575   26835 main.go:141] libmachine: (ha-670527)     <console type='pty'>
	I0915 07:01:22.980590   26835 main.go:141] libmachine: (ha-670527)       <target type='serial' port='0'/>
	I0915 07:01:22.980603   26835 main.go:141] libmachine: (ha-670527)     </console>
	I0915 07:01:22.980615   26835 main.go:141] libmachine: (ha-670527)     <rng model='virtio'>
	I0915 07:01:22.980626   26835 main.go:141] libmachine: (ha-670527)       <backend model='random'>/dev/random</backend>
	I0915 07:01:22.980636   26835 main.go:141] libmachine: (ha-670527)     </rng>
	I0915 07:01:22.980641   26835 main.go:141] libmachine: (ha-670527)     
	I0915 07:01:22.980653   26835 main.go:141] libmachine: (ha-670527)     
	I0915 07:01:22.980668   26835 main.go:141] libmachine: (ha-670527)   </devices>
	I0915 07:01:22.980677   26835 main.go:141] libmachine: (ha-670527) </domain>
	I0915 07:01:22.980681   26835 main.go:141] libmachine: (ha-670527) 
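Editor's note: the libvirt domain above is logged as raw XML before being defined. As a rough stand-alone sketch (not minikube's actual code), a skeleton like it could be produced with Go's encoding/xml; every struct and field name below is illustrative only.

// domainxml.go - hedged sketch: marshal a minimal libvirt <domain> skeleton
// with encoding/xml. Layout and names are illustrative, not minikube's.
package main

import (
	"encoding/xml"
	"fmt"
)

type Memory struct {
	Unit  string `xml:"unit,attr"`
	Value int    `xml:",chardata"`
}

type OSType struct {
	Type string `xml:"type"`
}

type Domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  Memory   `xml:"memory"`
	VCPU    int      `xml:"vcpu"`
	OS      OSType   `xml:"os"`
}

func main() {
	d := Domain{
		Type:   "kvm",
		Name:   "ha-670527",
		Memory: Memory{Unit: "MiB", Value: 2200},
		VCPU:   2,
		OS:     OSType{Type: "hvm"},
	}
	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		panic(err)
	}
	// prints a skeleton comparable to the <domain type='kvm'> block above,
	// minus the disk, interface, serial, console and rng devices
	fmt.Println(string(out))
}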
	I0915 07:01:22.984907   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:0b:b1:eb in network default
	I0915 07:01:22.985523   26835 main.go:141] libmachine: (ha-670527) Ensuring networks are active...
	I0915 07:01:22.985551   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:22.986143   26835 main.go:141] libmachine: (ha-670527) Ensuring network default is active
	I0915 07:01:22.986388   26835 main.go:141] libmachine: (ha-670527) Ensuring network mk-ha-670527 is active
	I0915 07:01:22.986851   26835 main.go:141] libmachine: (ha-670527) Getting domain xml...
	I0915 07:01:22.987441   26835 main.go:141] libmachine: (ha-670527) Creating domain...
	I0915 07:01:24.166128   26835 main.go:141] libmachine: (ha-670527) Waiting to get IP...
	I0915 07:01:24.166896   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.167250   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.167284   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.167233   26858 retry.go:31] will retry after 188.706653ms: waiting for machine to come up
	I0915 07:01:24.357578   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.358062   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.358210   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.358012   26858 retry.go:31] will retry after 260.220734ms: waiting for machine to come up
	I0915 07:01:24.619321   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.619779   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.619799   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.619736   26858 retry.go:31] will retry after 363.224901ms: waiting for machine to come up
	I0915 07:01:24.984128   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.984569   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.984613   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.984532   26858 retry.go:31] will retry after 535.952621ms: waiting for machine to come up
	I0915 07:01:25.522277   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:25.522767   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:25.522795   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:25.522719   26858 retry.go:31] will retry after 645.876747ms: waiting for machine to come up
	I0915 07:01:26.170487   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:26.170857   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:26.170900   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:26.170818   26858 retry.go:31] will retry after 846.64448ms: waiting for machine to come up
	I0915 07:01:27.018803   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:27.019226   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:27.019268   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:27.019128   26858 retry.go:31] will retry after 1.180309168s: waiting for machine to come up
	I0915 07:01:28.200567   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:28.201022   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:28.201053   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:28.200967   26858 retry.go:31] will retry after 988.422962ms: waiting for machine to come up
	I0915 07:01:29.191077   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:29.191473   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:29.191495   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:29.191434   26858 retry.go:31] will retry after 1.502324093s: waiting for machine to come up
	I0915 07:01:30.696077   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:30.696438   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:30.696459   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:30.696405   26858 retry.go:31] will retry after 1.467846046s: waiting for machine to come up
	I0915 07:01:32.166170   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:32.166717   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:32.166748   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:32.166644   26858 retry.go:31] will retry after 1.903254759s: waiting for machine to come up
	I0915 07:01:34.071613   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:34.072111   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:34.072132   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:34.072065   26858 retry.go:31] will retry after 2.570486979s: waiting for machine to come up
	I0915 07:01:36.645795   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:36.646237   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:36.646252   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:36.646204   26858 retry.go:31] will retry after 3.887633246s: waiting for machine to come up
	I0915 07:01:40.537825   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:40.538226   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:40.538256   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:40.538205   26858 retry.go:31] will retry after 4.090180911s: waiting for machine to come up
	I0915 07:01:44.630705   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.631138   26835 main.go:141] libmachine: (ha-670527) Found IP for machine: 192.168.39.54
	I0915 07:01:44.631163   26835 main.go:141] libmachine: (ha-670527) Reserving static IP address...
	I0915 07:01:44.631176   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has current primary IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
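Editor's note: the "will retry after ..." lines above reflect a polling loop with growing, jittered delays while the new domain waits for a DHCP lease. The following is a minimal stdlib-only sketch of that pattern; lookupIP is a placeholder, not minikube's retry package.

// waitip.go - hedged sketch of a retry loop with growing, jittered delays,
// mirroring the "will retry after ..." log lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for "ask libvirt/DHCP for the domain's lease".
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the fifth try
		return "", errors.New("no lease yet")
	}
	return "192.168.39.54", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// grow the delay and add jitter so parallel creations don't sync up
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: waiting %v (%v)\n", attempt, wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}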
	I0915 07:01:44.631529   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find host DHCP lease matching {name: "ha-670527", mac: "52:54:00:c3:49:88", ip: "192.168.39.54"} in network mk-ha-670527
	I0915 07:01:44.700421   26835 main.go:141] libmachine: (ha-670527) DBG | Getting to WaitForSSH function...
	I0915 07:01:44.700452   26835 main.go:141] libmachine: (ha-670527) Reserved static IP address: 192.168.39.54
	I0915 07:01:44.700463   26835 main.go:141] libmachine: (ha-670527) Waiting for SSH to be available...
	I0915 07:01:44.702979   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.703360   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:44.703397   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.703558   26835 main.go:141] libmachine: (ha-670527) DBG | Using SSH client type: external
	I0915 07:01:44.703586   26835 main.go:141] libmachine: (ha-670527) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa (-rw-------)
	I0915 07:01:44.703613   26835 main.go:141] libmachine: (ha-670527) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:01:44.703624   26835 main.go:141] libmachine: (ha-670527) DBG | About to run SSH command:
	I0915 07:01:44.703638   26835 main.go:141] libmachine: (ha-670527) DBG | exit 0
	I0915 07:01:44.829685   26835 main.go:141] libmachine: (ha-670527) DBG | SSH cmd err, output: <nil>: 
	I0915 07:01:44.829964   26835 main.go:141] libmachine: (ha-670527) KVM machine creation complete!
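Editor's note: the SSH probe above shells out to the system ssh client with host-key checking disabled and runs "exit 0" until it succeeds. Below is a rough sketch of that approach using os/exec; the user, address and key path are copied from the log, and this is not the libmachine implementation.

// sshprobe.go - hedged sketch: run "exit 0" over the system ssh client to
// check that a freshly created VM is reachable, roughly as in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(user, addr, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, addr),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// values taken from the log; the key path is environment-specific
	key := "/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa"
	for {
		if err := sshReady("docker", "192.168.39.54", key); err == nil {
			fmt.Println("ssh is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}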
	I0915 07:01:44.830281   26835 main.go:141] libmachine: (ha-670527) Calling .GetConfigRaw
	I0915 07:01:44.830895   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:44.831117   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:44.831314   26835 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 07:01:44.831329   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:01:44.832678   26835 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 07:01:44.832699   26835 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 07:01:44.832708   26835 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 07:01:44.832718   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:44.835692   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.836060   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:44.836098   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.836208   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:44.836378   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.836526   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.836642   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:44.836772   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:44.836986   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:44.836997   26835 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 07:01:44.945058   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:01:44.945078   26835 main.go:141] libmachine: Detecting the provisioner...
	I0915 07:01:44.945085   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:44.947793   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.948119   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:44.948141   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.948290   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:44.948484   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.948613   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.948721   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:44.948831   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:44.948990   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:44.949001   26835 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 07:01:45.058744   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 07:01:45.058841   26835 main.go:141] libmachine: found compatible host: buildroot
	I0915 07:01:45.058850   26835 main.go:141] libmachine: Provisioning with buildroot...
	I0915 07:01:45.058857   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:45.059094   26835 buildroot.go:166] provisioning hostname "ha-670527"
	I0915 07:01:45.059126   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:45.059297   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.061876   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.062229   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.062258   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.062348   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.062511   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.062614   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.062786   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.062927   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:45.063089   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:45.063100   26835 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527 && echo "ha-670527" | sudo tee /etc/hostname
	I0915 07:01:45.183848   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527
	
	I0915 07:01:45.183886   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.186544   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.186873   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.186896   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.187091   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.187253   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.187406   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.187536   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.187697   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:45.187915   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:45.187935   26835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:01:45.302480   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:01:45.302530   26835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:01:45.302576   26835 buildroot.go:174] setting up certificates
	I0915 07:01:45.302592   26835 provision.go:84] configureAuth start
	I0915 07:01:45.302605   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:45.302895   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:45.305295   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.305594   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.305617   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.305748   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.307612   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.307902   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.307932   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.308072   26835 provision.go:143] copyHostCerts
	I0915 07:01:45.308104   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:01:45.308140   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:01:45.308148   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:01:45.308215   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:01:45.308307   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:01:45.308325   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:01:45.308332   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:01:45.308379   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:01:45.308484   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:01:45.308508   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:01:45.308517   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:01:45.308552   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:01:45.308627   26835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527 san=[127.0.0.1 192.168.39.54 ha-670527 localhost minikube]
	I0915 07:01:45.491639   26835 provision.go:177] copyRemoteCerts
	I0915 07:01:45.491698   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:01:45.491720   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.494361   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.494658   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.494685   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.494797   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.495000   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.495146   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.495278   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:45.579874   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:01:45.579950   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:01:45.604185   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:01:45.604260   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0915 07:01:45.628022   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:01:45.628090   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 07:01:45.651817   26835 provision.go:87] duration metric: took 349.209152ms to configureAuth
	I0915 07:01:45.651847   26835 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:01:45.652034   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:01:45.652159   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.655043   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.655378   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.655405   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.655617   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.655762   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.655915   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.656063   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.656217   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:45.656384   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:45.656399   26835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:01:45.887505   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:01:45.887532   26835 main.go:141] libmachine: Checking connection to Docker...
	I0915 07:01:45.887542   26835 main.go:141] libmachine: (ha-670527) Calling .GetURL
	I0915 07:01:45.888872   26835 main.go:141] libmachine: (ha-670527) DBG | Using libvirt version 6000000
	I0915 07:01:45.891428   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.891766   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.891793   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.891941   26835 main.go:141] libmachine: Docker is up and running!
	I0915 07:01:45.891966   26835 main.go:141] libmachine: Reticulating splines...
	I0915 07:01:45.891976   26835 client.go:171] duration metric: took 23.467602141s to LocalClient.Create
	I0915 07:01:45.891999   26835 start.go:167] duration metric: took 23.467666954s to libmachine.API.Create "ha-670527"
	I0915 07:01:45.892007   26835 start.go:293] postStartSetup for "ha-670527" (driver="kvm2")
	I0915 07:01:45.892016   26835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:01:45.892032   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:45.892235   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:01:45.892256   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.894291   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.894576   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.894599   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.894739   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.894920   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.895026   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.895125   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:45.980573   26835 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:01:45.985151   26835 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:01:45.985189   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:01:45.985246   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:01:45.985325   26835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:01:45.985335   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:01:45.985421   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:01:45.995087   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:01:46.019331   26835 start.go:296] duration metric: took 127.309643ms for postStartSetup
	I0915 07:01:46.019392   26835 main.go:141] libmachine: (ha-670527) Calling .GetConfigRaw
	I0915 07:01:46.019946   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:46.022538   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.022832   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.022860   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.023068   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:01:46.023248   26835 start.go:128] duration metric: took 23.616801339s to createHost
	I0915 07:01:46.023275   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:46.025196   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.025484   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.025508   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.025641   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:46.025840   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.025978   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.026139   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:46.026267   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:46.026478   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:46.026498   26835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:01:46.134541   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726383706.112000470
	
	I0915 07:01:46.134569   26835 fix.go:216] guest clock: 1726383706.112000470
	I0915 07:01:46.134580   26835 fix.go:229] Guest: 2024-09-15 07:01:46.11200047 +0000 UTC Remote: 2024-09-15 07:01:46.023265524 +0000 UTC m=+23.718631124 (delta=88.734946ms)
	I0915 07:01:46.134604   26835 fix.go:200] guest clock delta is within tolerance: 88.734946ms
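Editor's note: the "guest clock" lines compare the guest's "date +%s.%N" output against the host's wall clock and accept the machine only if the delta is small. Below is a self-contained sketch of that check; the tolerance constant is an assumed value for illustration, not minikube's.

// clockdelta.go - hedged sketch: parse "date +%s.%N"-style output from a guest
// and compare it against the local clock, as in the "guest clock" lines above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns a string like "1726383706.112000470" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad or truncate the fractional part to exactly nine digits (nanoseconds)
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// simulate a guest whose clock is ~90ms behind the host
	now := time.Now().Add(-90 * time.Millisecond)
	guest, err := parseGuestClock(fmt.Sprintf("%d.%09d", now.Unix(), now.Nanosecond()))
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
}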
	I0915 07:01:46.134609   26835 start.go:83] releasing machines lock for "ha-670527", held for 23.728244309s
	I0915 07:01:46.134635   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.134884   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:46.137240   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.137654   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.137678   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.137879   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.138482   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.138646   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.138754   26835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:01:46.138801   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:46.138868   26835 ssh_runner.go:195] Run: cat /version.json
	I0915 07:01:46.138890   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:46.141285   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141474   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141599   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.141626   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141742   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:46.141837   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.141864   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141923   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.141984   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:46.142063   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:46.142179   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.142177   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:46.142326   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:46.142467   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:46.218952   26835 ssh_runner.go:195] Run: systemctl --version
	I0915 07:01:46.246916   26835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:01:46.409291   26835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:01:46.415192   26835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:01:46.415272   26835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:01:46.432003   26835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 07:01:46.432030   26835 start.go:495] detecting cgroup driver to use...
	I0915 07:01:46.432101   26835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:01:46.448723   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:01:46.462830   26835 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:01:46.462893   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:01:46.476505   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:01:46.490557   26835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:01:46.602542   26835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:01:46.739310   26835 docker.go:233] disabling docker service ...
	I0915 07:01:46.739370   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:01:46.753843   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:01:46.766903   26835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:01:46.898044   26835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:01:47.030704   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:01:47.050656   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:01:47.071949   26835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:01:47.072007   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.082479   26835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:01:47.082549   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.092957   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.103025   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.113313   26835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:01:47.123742   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.134057   26835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.151062   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.161590   26835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:01:47.170543   26835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 07:01:47.170591   26835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 07:01:47.182597   26835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:01:47.192398   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:01:47.324081   26835 ssh_runner.go:195] Run: sudo systemctl restart crio
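(The sequence above writes /etc/crictl.yaml and then patches /etc/crio/crio.conf.d/02-crio.conf in place with sed, pinning pause_image to registry.k8s.io/pause:3.10 and cgroup_manager to cgroupfs before restarting cri-o. A rough Go equivalent of those two substitutions, regexp-based and using only the paths and values visible in the log; this is not minikube's actual code:)

    package main

    import (
    	"os"
    	"regexp"
    )

    // patchCrioConf mirrors the two sed edits from the log above.
    func patchCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
    		panic(err)
    	}
    }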
	I0915 07:01:47.420878   26835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:01:47.420959   26835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:01:47.425497   26835 start.go:563] Will wait 60s for crictl version
	I0915 07:01:47.425545   26835 ssh_runner.go:195] Run: which crictl
	I0915 07:01:47.429322   26835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:01:47.467220   26835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:01:47.467299   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:01:47.495752   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:01:47.524642   26835 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:01:47.525898   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:47.528463   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:47.528841   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:47.528868   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:47.529092   26835 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:01:47.533285   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:01:47.546197   26835 kubeadm.go:883] updating cluster {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:01:47.546295   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:01:47.546333   26835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:01:47.576923   26835 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0915 07:01:47.576988   26835 ssh_runner.go:195] Run: which lz4
	I0915 07:01:47.580864   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0915 07:01:47.580971   26835 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 07:01:47.585030   26835 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 07:01:47.585056   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0915 07:01:48.943403   26835 crio.go:462] duration metric: took 1.362463597s to copy over tarball
	I0915 07:01:48.943469   26835 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 07:01:50.907239   26835 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.963740546s)
	I0915 07:01:50.907276   26835 crio.go:469] duration metric: took 1.963847523s to extract the tarball
	I0915 07:01:50.907286   26835 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 07:01:50.944640   26835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:01:50.989423   26835 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:01:50.989449   26835 cache_images.go:84] Images are preloaded, skipping loading
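(crio.go decides whether to stream the ~388 MB preload tarball by asking crictl for its image list and looking for registry.k8s.io/kube-apiserver:v1.31.1; it is missing on the first pass above and present after extraction. A sketch of that check, assuming the usual `crictl images --output json` shape with a top-level "images" array of repoTags:)

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageList matches the assumed JSON shape of `crictl images --output json`:
    // {"images":[{"repoTags":["..."]}, ...]}.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if strings.EqualFold(t, tag) {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("preloaded:", ok) // false -> copy and extract preloaded.tar.lz4 first
    }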
	I0915 07:01:50.989457   26835 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0915 07:01:50.989586   26835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:01:50.989679   26835 ssh_runner.go:195] Run: crio config
	I0915 07:01:51.036536   26835 cni.go:84] Creating CNI manager for ""
	I0915 07:01:51.036560   26835 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0915 07:01:51.036576   26835 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:01:51.036605   26835 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-670527 NodeName:ha-670527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 07:01:51.036776   26835 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-670527"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
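(The kubeadm.yaml above is rendered from the kubeadm options dumped at kubeadm.go:181: node name, advertise address, CRI socket and node IP are substituted into a v1beta3 template. A stripped-down illustration of that rendering with Go's text/template, reproducing only a few fields and only values visible in the log; the real template in minikube is much larger:)

    package main

    import (
    	"os"
    	"text/template"
    )

    // A heavily trimmed stand-in for minikube's kubeadm template.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    type kubeadmParams struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	NodeIP           string
    }

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress: "192.168.39.54",
    		APIServerPort:    8443,
    		NodeName:         "ha-670527",
    		NodeIP:           "192.168.39.54",
    	}
    	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	if err := tmpl.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }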
	
	I0915 07:01:51.036805   26835 kube-vip.go:115] generating kube-vip config ...
	I0915 07:01:51.036850   26835 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:01:51.052905   26835 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:01:51.053025   26835 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
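(kube-vip.go:167 only adds the lb_enable / lb_port env vars to the manifest above after the `modprobe --all ip_vs ...` probe succeeds, i.e. control-plane load balancing is auto-enabled when the IPVS modules are loadable. A small sketch of that decision; the module list and values come from the log, the env-map plumbing is illustrative and not minikube's code:)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same module list the log probes before enabling load balancing.
    	probe := exec.Command("sudo", "sh", "-c",
    		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")

    	env := map[string]string{
    		"vip_arp": "true",
    		"port":    "8443",
    		"address": "192.168.39.254",
    	}
    	if err := probe.Run(); err == nil {
    		// IPVS is available: switch on kube-vip's control-plane load balancer.
    		env["lb_enable"] = "true"
    		env["lb_port"] = "8443"
    	} else {
    		fmt.Println("IPVS modules unavailable, leaving lb_enable unset:", err)
    	}
    	fmt.Println(env)
    }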
	I0915 07:01:51.053089   26835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:01:51.063199   26835 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:01:51.063273   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0915 07:01:51.072823   26835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0915 07:01:51.088848   26835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:01:51.104303   26835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0915 07:01:51.120544   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0915 07:01:51.136574   26835 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:01:51.140343   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:01:51.152528   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:01:51.265401   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:01:51.281724   26835 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.54
	I0915 07:01:51.281749   26835 certs.go:194] generating shared ca certs ...
	I0915 07:01:51.281769   26835 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.281940   26835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:01:51.281983   26835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:01:51.281995   26835 certs.go:256] generating profile certs ...
	I0915 07:01:51.282050   26835 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:01:51.282070   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt with IP's: []
	I0915 07:01:51.401304   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt ...
	I0915 07:01:51.401332   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt: {Name:mka5690a76d05395db0946261ac3997a291081b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.401517   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key ...
	I0915 07:01:51.401538   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key: {Name:mkd1b6294a065842e208ffc8dee320a135e903bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.401642   26835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9
	I0915 07:01:51.401662   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.254]
	I0915 07:01:51.497958   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9 ...
	I0915 07:01:51.497984   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9: {Name:mkb63f9e00b6807ec3effb048bb09c3cb258c80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.498180   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9 ...
	I0915 07:01:51.498198   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9: {Name:mk1c9961994945d680cbfecfc61b9b26bd523332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.498333   26835 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:01:51.498424   26835 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
	I0915 07:01:51.498479   26835 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:01:51.498495   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt with IP's: []
	I0915 07:01:51.619316   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt ...
	I0915 07:01:51.619354   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt: {Name:mk8e8b1dc5f4806199580985192f13865ad9631a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.619537   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key ...
	I0915 07:01:51.619550   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key: {Name:mk692b18ee8d7ed5ffa7b264e65e02a13aab4bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
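(The apiserver profile cert generated above carries IP SANs for the service VIP 10.96.0.1, localhost, the node IP 192.168.39.54 and the HA VIP 192.168.39.254, and is signed by the shared minikubeCA. A condensed self-signed-CA plus serving-cert sketch with crypto/x509; key sizes, validity periods and output handling are placeholders, not minikube's crypto.go:)

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// 1. A throwaway CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// 2. A serving cert with the IP SANs listed in the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.54"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }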
	I0915 07:01:51.619647   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:01:51.619668   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:01:51.619679   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:01:51.619694   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:01:51.619707   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:01:51.619720   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:01:51.619732   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:01:51.619745   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:01:51.619799   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:01:51.619841   26835 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:01:51.619851   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:01:51.619871   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:01:51.619914   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:01:51.619946   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:01:51.619988   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:01:51.620021   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.620035   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:51.620045   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.620592   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:01:51.646496   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:01:51.670850   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:01:51.697503   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:01:51.723715   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0915 07:01:51.749604   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:01:51.775182   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:01:51.798076   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:01:51.820746   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:01:51.845308   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:01:51.871272   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:01:51.897269   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:01:51.916589   26835 ssh_runner.go:195] Run: openssl version
	I0915 07:01:51.922558   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:01:51.934005   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.938769   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.938820   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.944885   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 07:01:51.957963   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:01:51.969571   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.974259   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.974315   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.979983   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:01:51.991261   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:01:52.003094   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:52.007600   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:52.007659   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:52.013212   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
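(The three ln -fs targets above, 51391683.0, 3ec20f2e.0 and b5213941.0, come straight from the preceding `openssl x509 -hash -noout -in <pem>` calls: OpenSSL looks up CA certificates by subject-name hash plus a ".0" suffix, so each installed PEM needs such a symlink under /etc/ssl/certs. A small helper reproducing that naming, exec-based and mirroring the commands in the log:)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // subjectHashLink recreates what the log does with openssl + ln -fs:
    // hash the cert's subject and link /etc/ssl/certs/<hash>.0 to the PEM.
    func subjectHashLink(pemPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	// ln -fs equivalent: drop any stale link, then point it at the PEM.
    	_ = os.Remove(link)
    	return link, os.Symlink(pemPath, link)
    }

    func main() {
    	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("created", link) // e.g. /etc/ssl/certs/b5213941.0, as in the log
    }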
	I0915 07:01:52.024389   26835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:01:52.028284   26835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:01:52.028335   26835 kubeadm.go:392] StartCluster: {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:01:52.028395   26835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:01:52.028458   26835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:01:52.068839   26835 cri.go:89] found id: ""
	I0915 07:01:52.068901   26835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 07:01:52.081684   26835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 07:01:52.093797   26835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 07:01:52.109244   26835 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 07:01:52.109265   26835 kubeadm.go:157] found existing configuration files:
	
	I0915 07:01:52.109309   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 07:01:52.119100   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 07:01:52.119162   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 07:01:52.129010   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 07:01:52.138382   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 07:01:52.138443   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 07:01:52.147811   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 07:01:52.156879   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 07:01:52.156922   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 07:01:52.166267   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 07:01:52.175241   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 07:01:52.175287   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 07:01:52.184586   26835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 07:01:52.293898   26835 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 07:01:52.294087   26835 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 07:01:52.391078   26835 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 07:01:52.391223   26835 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 07:01:52.391362   26835 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 07:01:52.401134   26835 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 07:01:52.403461   26835 out.go:235]   - Generating certificates and keys ...
	I0915 07:01:52.404736   26835 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 07:01:52.404828   26835 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 07:01:52.769208   26835 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 07:01:52.890893   26835 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 07:01:53.106013   26835 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 07:01:53.212284   26835 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 07:01:53.427702   26835 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 07:01:53.427959   26835 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-670527 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0915 07:01:53.492094   26835 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 07:01:53.492266   26835 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-670527 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0915 07:01:53.648978   26835 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 07:01:53.712245   26835 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 07:01:53.783010   26835 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 07:01:53.783253   26835 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 07:01:54.269687   26835 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 07:01:54.413559   26835 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 07:01:54.606535   26835 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 07:01:54.768289   26835 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 07:01:54.881907   26835 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 07:01:54.882517   26835 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 07:01:54.885516   26835 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 07:01:54.887699   26835 out.go:235]   - Booting up control plane ...
	I0915 07:01:54.887822   26835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 07:01:54.887927   26835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 07:01:54.888028   26835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 07:01:54.904806   26835 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 07:01:54.910929   26835 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 07:01:54.910987   26835 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 07:01:55.044359   26835 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 07:01:55.044554   26835 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 07:01:56.048040   26835 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002197719s
	I0915 07:01:56.048186   26835 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 07:02:01.698547   26835 kubeadm.go:310] [api-check] The API server is healthy after 5.653915359s
	I0915 07:02:01.712760   26835 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 07:02:01.726582   26835 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 07:02:01.762909   26835 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 07:02:01.763158   26835 kubeadm.go:310] [mark-control-plane] Marking the node ha-670527 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 07:02:01.775624   26835 kubeadm.go:310] [bootstrap-token] Using token: qqoe14.538zsiy1hqi1fmmp
	I0915 07:02:01.777066   26835 out.go:235]   - Configuring RBAC rules ...
	I0915 07:02:01.777189   26835 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 07:02:01.785613   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 07:02:01.796346   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 07:02:01.799830   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 07:02:01.803486   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 07:02:01.809344   26835 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 07:02:02.106575   26835 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 07:02:02.529953   26835 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 07:02:03.103824   26835 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 07:02:03.103851   26835 kubeadm.go:310] 
	I0915 07:02:03.103902   26835 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 07:02:03.103911   26835 kubeadm.go:310] 
	I0915 07:02:03.104041   26835 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 07:02:03.104065   26835 kubeadm.go:310] 
	I0915 07:02:03.104109   26835 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 07:02:03.104191   26835 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 07:02:03.104258   26835 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 07:02:03.104267   26835 kubeadm.go:310] 
	I0915 07:02:03.104340   26835 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 07:02:03.104350   26835 kubeadm.go:310] 
	I0915 07:02:03.104425   26835 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 07:02:03.104434   26835 kubeadm.go:310] 
	I0915 07:02:03.104501   26835 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 07:02:03.104598   26835 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 07:02:03.104705   26835 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 07:02:03.104720   26835 kubeadm.go:310] 
	I0915 07:02:03.104819   26835 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 07:02:03.104934   26835 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 07:02:03.104948   26835 kubeadm.go:310] 
	I0915 07:02:03.105060   26835 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qqoe14.538zsiy1hqi1fmmp \
	I0915 07:02:03.105198   26835 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b \
	I0915 07:02:03.105239   26835 kubeadm.go:310] 	--control-plane 
	I0915 07:02:03.105253   26835 kubeadm.go:310] 
	I0915 07:02:03.105385   26835 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 07:02:03.105396   26835 kubeadm.go:310] 
	I0915 07:02:03.105511   26835 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qqoe14.538zsiy1hqi1fmmp \
	I0915 07:02:03.105650   26835 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b 
	I0915 07:02:03.106268   26835 kubeadm.go:310] W0915 07:01:52.272705     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 07:02:03.106674   26835 kubeadm.go:310] W0915 07:01:52.276103     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 07:02:03.106810   26835 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
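(kubeadm's progress above is gated on two probes: the kubelet healthz endpoint at http://127.0.0.1:10248/healthz, healthy after about 1s, and the API server check, healthy after about 5.7s. A bare-bones poller for the kubelet endpoint, assuming the same URL and the 4m0s ceiling kubeadm prints; this is a sketch, not kubeadm's internal waiter:)

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it answers 200 OK or the deadline passes,
    // mirroring the "[kubelet-check] ... up to 4m0s" wait in the log.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	client := &http.Client{Timeout: 2 * time.Second}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("kubelet healthy")
    }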
	I0915 07:02:03.106836   26835 cni.go:84] Creating CNI manager for ""
	I0915 07:02:03.106847   26835 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0915 07:02:03.108556   26835 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 07:02:03.110033   26835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 07:02:03.115651   26835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 07:02:03.115666   26835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 07:02:03.135130   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0915 07:02:03.516090   26835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 07:02:03.516167   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:03.516174   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-670527 minikube.k8s.io/updated_at=2024_09_15T07_02_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=ha-670527 minikube.k8s.io/primary=true
	I0915 07:02:03.720688   26835 ops.go:34] apiserver oom_adj: -16
	I0915 07:02:03.720852   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:04.220992   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:04.721026   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:05.221544   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:05.721967   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:06.221944   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:06.721918   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:06.825074   26835 kubeadm.go:1113] duration metric: took 3.30897778s to wait for elevateKubeSystemPrivileges
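(The block of repeated `kubectl get sa default` calls above is minikube waiting, about 3.3s here, for the default service account to exist before relying on the minikube-rbac ClusterRoleBinding created at 07:02:03.516. A simplified poll-then-bind version of that step, shelling out to the same kubectl binary and kubeconfig paths shown in the log; the ordering and the 2-minute ceiling are assumptions of this sketch:)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    const kubectl = "/var/lib/minikube/binaries/v1.31.1/kubectl"
    const kubeconfig = "/var/lib/minikube/kubeconfig"

    func run(args ...string) error {
    	return exec.Command("sudo", append([]string{kubectl}, args...)...).Run()
    }

    func main() {
    	// Poll every 500ms, as the timestamps in the log suggest, until the
    	// default service account shows up.
    	deadline := time.Now().Add(2 * time.Minute) // assumed ceiling, not from the log
    	for run("get", "sa", "default", "--kubeconfig="+kubeconfig) != nil {
    		if time.Now().After(deadline) {
    			panic("default service account never appeared")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	// Same binding the log creates with kubectl create clusterrolebinding minikube-rbac.
    	if err := run("create", "clusterrolebinding", "minikube-rbac",
    		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
    		"--kubeconfig="+kubeconfig); err != nil {
    		fmt.Println("clusterrolebinding create failed (may already exist):", err)
    	}
    }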
	I0915 07:02:06.825124   26835 kubeadm.go:394] duration metric: took 14.796790647s to StartCluster
	I0915 07:02:06.825151   26835 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:06.825248   26835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:02:06.826001   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:06.826205   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 07:02:06.826222   26835 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 07:02:06.826203   26835 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:06.826275   26835 addons.go:69] Setting storage-provisioner=true in profile "ha-670527"
	I0915 07:02:06.826288   26835 addons.go:234] Setting addon storage-provisioner=true in "ha-670527"
	I0915 07:02:06.826289   26835 addons.go:69] Setting default-storageclass=true in profile "ha-670527"
	I0915 07:02:06.826312   26835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-670527"
	I0915 07:02:06.826319   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:06.826277   26835 start.go:241] waiting for startup goroutines ...
	I0915 07:02:06.826431   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:06.826661   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.826699   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.826761   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.826798   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.841457   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45393
	I0915 07:02:06.841556   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0915 07:02:06.841985   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.842009   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.842500   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.842506   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.842517   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.842520   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.842859   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.842871   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.843077   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:06.843364   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.843395   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.845283   26835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:02:06.845642   26835 kapi.go:59] client config for ha-670527: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key", CAFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 07:02:06.846317   26835 cert_rotation.go:140] Starting client certificate rotation controller
	I0915 07:02:06.846760   26835 addons.go:234] Setting addon default-storageclass=true in "ha-670527"
	I0915 07:02:06.846802   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:06.847165   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.847203   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.858378   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0915 07:02:06.858780   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.859194   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.859214   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.859571   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.859751   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:06.861452   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:06.861502   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0915 07:02:06.861922   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.862339   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.862361   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.862757   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.863263   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.863348   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.863393   26835 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 07:02:06.864862   26835 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 07:02:06.864883   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 07:02:06.864900   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:06.867717   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.868106   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:06.868131   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.868252   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:06.868413   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:06.868595   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:06.868701   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:06.878680   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0915 07:02:06.879120   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.879562   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.879592   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.879969   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.880136   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:06.881611   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:06.881843   26835 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 07:02:06.881859   26835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 07:02:06.881877   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:06.884389   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.884768   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:06.884793   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.884948   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:06.885116   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:06.885279   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:06.885395   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:06.935561   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 07:02:07.037232   26835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 07:02:07.056443   26835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 07:02:07.536670   26835 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
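
Annotation: the "host record injected into CoreDNS's ConfigMap" line is the result of the sed pipeline run at 07:02:06.935561, which inserts a `hosts { 192.168.39.1 host.minikube.internal ... }` block ahead of the forward directive in the Corefile and replaces the ConfigMap. The sketch below does the equivalent edit with client-go instead of the kubectl/sed pipeline the log actually uses; the kubeconfig path and the exact forward-line match are assumptions.

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the log runs kubectl on the node with /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts block in front of the forward directive, like the sed expression in the log.
	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)

	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
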
	I0915 07:02:07.917700   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.917729   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.917717   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.917799   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.918090   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.918095   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.918117   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918182   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918192   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.918208   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.918228   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918260   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918273   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.918292   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.918444   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918471   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918501   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.918519   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918531   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918607   26835 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 07:02:07.918627   26835 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 07:02:07.918743   26835 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0915 07:02:07.918753   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:07.918765   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:07.918773   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:07.932551   26835 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0915 07:02:07.933454   26835 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0915 07:02:07.933473   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:07.933489   26835 round_trippers.go:473]     Content-Type: application/json
	I0915 07:02:07.933496   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:07.933500   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:07.937740   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:02:07.937929   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.937943   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.938292   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.938328   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.938329   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.940177   26835 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0915 07:02:07.941522   26835 addons.go:510] duration metric: took 1.115301832s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0915 07:02:07.941556   26835 start.go:246] waiting for cluster config update ...
	I0915 07:02:07.941569   26835 start.go:255] writing updated cluster config ...
	I0915 07:02:07.943080   26835 out.go:201] 
	I0915 07:02:07.944459   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:07.944560   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:02:07.946065   26835 out.go:177] * Starting "ha-670527-m02" control-plane node in "ha-670527" cluster
	I0915 07:02:07.947264   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:02:07.947289   26835 cache.go:56] Caching tarball of preloaded images
	I0915 07:02:07.947402   26835 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:02:07.947416   26835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:02:07.947521   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:02:07.947727   26835 start.go:360] acquireMachinesLock for ha-670527-m02: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:02:07.947782   26835 start.go:364] duration metric: took 32.742µs to acquireMachinesLock for "ha-670527-m02"
	I0915 07:02:07.947804   26835 start.go:93] Provisioning new machine with config: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:07.947898   26835 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0915 07:02:07.949571   26835 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 07:02:07.949670   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:07.949710   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:07.964465   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41537
	I0915 07:02:07.964840   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:07.965294   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:07.965315   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:07.965702   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:07.965905   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:07.966037   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:07.966246   26835 start.go:159] libmachine.API.Create for "ha-670527" (driver="kvm2")
	I0915 07:02:07.966278   26835 client.go:168] LocalClient.Create starting
	I0915 07:02:07.966313   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 07:02:07.966359   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:02:07.966386   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:02:07.966455   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 07:02:07.966483   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:02:07.966500   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:02:07.966525   26835 main.go:141] libmachine: Running pre-create checks...
	I0915 07:02:07.966537   26835 main.go:141] libmachine: (ha-670527-m02) Calling .PreCreateCheck
	I0915 07:02:07.966712   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetConfigRaw
	I0915 07:02:07.967130   26835 main.go:141] libmachine: Creating machine...
	I0915 07:02:07.967148   26835 main.go:141] libmachine: (ha-670527-m02) Calling .Create
	I0915 07:02:07.967289   26835 main.go:141] libmachine: (ha-670527-m02) Creating KVM machine...
	I0915 07:02:07.968555   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found existing default KVM network
	I0915 07:02:07.968645   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found existing private KVM network mk-ha-670527
	I0915 07:02:07.968783   26835 main.go:141] libmachine: (ha-670527-m02) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02 ...
	I0915 07:02:07.968815   26835 main.go:141] libmachine: (ha-670527-m02) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 07:02:07.968846   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:07.968755   27180 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:02:07.968929   26835 main.go:141] libmachine: (ha-670527-m02) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 07:02:08.201572   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:08.201469   27180 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa...
	I0915 07:02:08.335695   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:08.335566   27180 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/ha-670527-m02.rawdisk...
	I0915 07:02:08.335731   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Writing magic tar header
	I0915 07:02:08.335746   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Writing SSH key tar header
	I0915 07:02:08.335765   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:08.335695   27180 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02 ...
	I0915 07:02:08.335880   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02
	I0915 07:02:08.335936   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 07:02:08.335951   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02 (perms=drwx------)
	I0915 07:02:08.335967   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 07:02:08.335982   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 07:02:08.335995   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:02:08.336008   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 07:02:08.336020   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 07:02:08.336042   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 07:02:08.336054   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins
	I0915 07:02:08.336063   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 07:02:08.336076   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home
	I0915 07:02:08.336092   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Skipping /home - not owner
	I0915 07:02:08.336103   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 07:02:08.336125   26835 main.go:141] libmachine: (ha-670527-m02) Creating domain...
	I0915 07:02:08.337089   26835 main.go:141] libmachine: (ha-670527-m02) define libvirt domain using xml: 
	I0915 07:02:08.337108   26835 main.go:141] libmachine: (ha-670527-m02) <domain type='kvm'>
	I0915 07:02:08.337118   26835 main.go:141] libmachine: (ha-670527-m02)   <name>ha-670527-m02</name>
	I0915 07:02:08.337131   26835 main.go:141] libmachine: (ha-670527-m02)   <memory unit='MiB'>2200</memory>
	I0915 07:02:08.337143   26835 main.go:141] libmachine: (ha-670527-m02)   <vcpu>2</vcpu>
	I0915 07:02:08.337154   26835 main.go:141] libmachine: (ha-670527-m02)   <features>
	I0915 07:02:08.337163   26835 main.go:141] libmachine: (ha-670527-m02)     <acpi/>
	I0915 07:02:08.337172   26835 main.go:141] libmachine: (ha-670527-m02)     <apic/>
	I0915 07:02:08.337181   26835 main.go:141] libmachine: (ha-670527-m02)     <pae/>
	I0915 07:02:08.337188   26835 main.go:141] libmachine: (ha-670527-m02)     
	I0915 07:02:08.337199   26835 main.go:141] libmachine: (ha-670527-m02)   </features>
	I0915 07:02:08.337217   26835 main.go:141] libmachine: (ha-670527-m02)   <cpu mode='host-passthrough'>
	I0915 07:02:08.337227   26835 main.go:141] libmachine: (ha-670527-m02)   
	I0915 07:02:08.337234   26835 main.go:141] libmachine: (ha-670527-m02)   </cpu>
	I0915 07:02:08.337245   26835 main.go:141] libmachine: (ha-670527-m02)   <os>
	I0915 07:02:08.337256   26835 main.go:141] libmachine: (ha-670527-m02)     <type>hvm</type>
	I0915 07:02:08.337265   26835 main.go:141] libmachine: (ha-670527-m02)     <boot dev='cdrom'/>
	I0915 07:02:08.337275   26835 main.go:141] libmachine: (ha-670527-m02)     <boot dev='hd'/>
	I0915 07:02:08.337286   26835 main.go:141] libmachine: (ha-670527-m02)     <bootmenu enable='no'/>
	I0915 07:02:08.337311   26835 main.go:141] libmachine: (ha-670527-m02)   </os>
	I0915 07:02:08.337327   26835 main.go:141] libmachine: (ha-670527-m02)   <devices>
	I0915 07:02:08.337337   26835 main.go:141] libmachine: (ha-670527-m02)     <disk type='file' device='cdrom'>
	I0915 07:02:08.337348   26835 main.go:141] libmachine: (ha-670527-m02)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/boot2docker.iso'/>
	I0915 07:02:08.337356   26835 main.go:141] libmachine: (ha-670527-m02)       <target dev='hdc' bus='scsi'/>
	I0915 07:02:08.337362   26835 main.go:141] libmachine: (ha-670527-m02)       <readonly/>
	I0915 07:02:08.337369   26835 main.go:141] libmachine: (ha-670527-m02)     </disk>
	I0915 07:02:08.337376   26835 main.go:141] libmachine: (ha-670527-m02)     <disk type='file' device='disk'>
	I0915 07:02:08.337386   26835 main.go:141] libmachine: (ha-670527-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 07:02:08.337396   26835 main.go:141] libmachine: (ha-670527-m02)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/ha-670527-m02.rawdisk'/>
	I0915 07:02:08.337416   26835 main.go:141] libmachine: (ha-670527-m02)       <target dev='hda' bus='virtio'/>
	I0915 07:02:08.337432   26835 main.go:141] libmachine: (ha-670527-m02)     </disk>
	I0915 07:02:08.337442   26835 main.go:141] libmachine: (ha-670527-m02)     <interface type='network'>
	I0915 07:02:08.337453   26835 main.go:141] libmachine: (ha-670527-m02)       <source network='mk-ha-670527'/>
	I0915 07:02:08.337461   26835 main.go:141] libmachine: (ha-670527-m02)       <model type='virtio'/>
	I0915 07:02:08.337468   26835 main.go:141] libmachine: (ha-670527-m02)     </interface>
	I0915 07:02:08.337480   26835 main.go:141] libmachine: (ha-670527-m02)     <interface type='network'>
	I0915 07:02:08.337495   26835 main.go:141] libmachine: (ha-670527-m02)       <source network='default'/>
	I0915 07:02:08.337507   26835 main.go:141] libmachine: (ha-670527-m02)       <model type='virtio'/>
	I0915 07:02:08.337515   26835 main.go:141] libmachine: (ha-670527-m02)     </interface>
	I0915 07:02:08.337524   26835 main.go:141] libmachine: (ha-670527-m02)     <serial type='pty'>
	I0915 07:02:08.337531   26835 main.go:141] libmachine: (ha-670527-m02)       <target port='0'/>
	I0915 07:02:08.337543   26835 main.go:141] libmachine: (ha-670527-m02)     </serial>
	I0915 07:02:08.337551   26835 main.go:141] libmachine: (ha-670527-m02)     <console type='pty'>
	I0915 07:02:08.337560   26835 main.go:141] libmachine: (ha-670527-m02)       <target type='serial' port='0'/>
	I0915 07:02:08.337574   26835 main.go:141] libmachine: (ha-670527-m02)     </console>
	I0915 07:02:08.337585   26835 main.go:141] libmachine: (ha-670527-m02)     <rng model='virtio'>
	I0915 07:02:08.337594   26835 main.go:141] libmachine: (ha-670527-m02)       <backend model='random'>/dev/random</backend>
	I0915 07:02:08.337606   26835 main.go:141] libmachine: (ha-670527-m02)     </rng>
	I0915 07:02:08.337622   26835 main.go:141] libmachine: (ha-670527-m02)     
	I0915 07:02:08.337634   26835 main.go:141] libmachine: (ha-670527-m02)     
	I0915 07:02:08.337641   26835 main.go:141] libmachine: (ha-670527-m02)   </devices>
	I0915 07:02:08.337671   26835 main.go:141] libmachine: (ha-670527-m02) </domain>
	I0915 07:02:08.337702   26835 main.go:141] libmachine: (ha-670527-m02) 
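
Annotation: the XML printed above is the libvirt domain definition for ha-670527-m02: the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on the mk-ha-670527 and default networks. The kvm2 driver defines and starts the domain through the libvirt API; the sketch below illustrates the same two steps with the virsh CLI from Go, which is an assumed stand-in, not what the driver actually calls.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStartDomain writes the generated XML to a temp file and hands it to virsh.
// `virsh define` registers the domain and `virsh start` boots it, after which the
// "Waiting to get IP..." phase in the log begins.
func defineAndStartDomain(name, domainXML string) error {
	tmp, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(domainXML); err != nil {
		return err
	}
	tmp.Close()

	for _, args := range [][]string{
		{"-c", "qemu:///system", "define", tmp.Name()},
		{"-c", "qemu:///system", "start", name},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	xml, err := os.ReadFile("ha-670527-m02.xml") // hypothetical path for the generated domain XML
	if err != nil {
		panic(err)
	}
	if err := defineAndStartDomain("ha-670527-m02", string(xml)); err != nil {
		panic(err)
	}
}
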
	I0915 07:02:08.344146   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:35:4a:1c in network default
	I0915 07:02:08.344712   26835 main.go:141] libmachine: (ha-670527-m02) Ensuring networks are active...
	I0915 07:02:08.344730   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:08.345453   26835 main.go:141] libmachine: (ha-670527-m02) Ensuring network default is active
	I0915 07:02:08.345766   26835 main.go:141] libmachine: (ha-670527-m02) Ensuring network mk-ha-670527 is active
	I0915 07:02:08.346254   26835 main.go:141] libmachine: (ha-670527-m02) Getting domain xml...
	I0915 07:02:08.347074   26835 main.go:141] libmachine: (ha-670527-m02) Creating domain...
	I0915 07:02:09.543665   26835 main.go:141] libmachine: (ha-670527-m02) Waiting to get IP...
	I0915 07:02:09.544332   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:09.544734   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:09.544777   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:09.544722   27180 retry.go:31] will retry after 223.468124ms: waiting for machine to come up
	I0915 07:02:09.770366   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:09.770773   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:09.770797   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:09.770732   27180 retry.go:31] will retry after 238.513621ms: waiting for machine to come up
	I0915 07:02:10.011141   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:10.011607   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:10.011630   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:10.011583   27180 retry.go:31] will retry after 331.854292ms: waiting for machine to come up
	I0915 07:02:10.345142   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:10.345563   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:10.345587   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:10.345512   27180 retry.go:31] will retry after 603.907795ms: waiting for machine to come up
	I0915 07:02:10.951205   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:10.951571   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:10.951597   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:10.951535   27180 retry.go:31] will retry after 682.284876ms: waiting for machine to come up
	I0915 07:02:11.635334   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:11.635823   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:11.635847   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:11.635765   27180 retry.go:31] will retry after 624.967872ms: waiting for machine to come up
	I0915 07:02:12.261987   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:12.262355   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:12.262383   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:12.262328   27180 retry.go:31] will retry after 1.134334018s: waiting for machine to come up
	I0915 07:02:13.399207   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:13.399742   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:13.399771   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:13.399729   27180 retry.go:31] will retry after 1.375956263s: waiting for machine to come up
	I0915 07:02:14.777134   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:14.777563   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:14.777579   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:14.777513   27180 retry.go:31] will retry after 1.768180712s: waiting for machine to come up
	I0915 07:02:16.546805   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:16.547182   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:16.547224   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:16.547118   27180 retry.go:31] will retry after 1.716559811s: waiting for machine to come up
	I0915 07:02:18.265525   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:18.265902   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:18.265950   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:18.265878   27180 retry.go:31] will retry after 2.21601359s: waiting for machine to come up
	I0915 07:02:20.483051   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:20.483454   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:20.483506   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:20.483423   27180 retry.go:31] will retry after 3.099487423s: waiting for machine to come up
	I0915 07:02:23.584173   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:23.584557   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:23.584586   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:23.584508   27180 retry.go:31] will retry after 4.098648524s: waiting for machine to come up
	I0915 07:02:27.684343   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.684832   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has current primary IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.684858   26835 main.go:141] libmachine: (ha-670527-m02) Found IP for machine: 192.168.39.222
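
Annotation: the retry.go lines above show the driver repeatedly looking for a DHCP lease for MAC 52:54:00:5d:e6:7b, with delays that grow from roughly 220ms to about 4s until 192.168.39.222 appears. Below is a stripped-down version of that wait loop; the lease lookup is left as a hypothetical callback, and the exact backoff schedule is an approximation of what the log shows rather than minikube's actual retry policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) between attempts, like retry.go in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	// Hypothetical lookup; the real driver inspects the libvirt DHCP leases for the domain's MAC.
	ip, err := waitForIP(func() (string, error) { return "192.168.39.222", nil }, time.Minute)
	fmt.Println(ip, err)
}
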
	I0915 07:02:27.684883   26835 main.go:141] libmachine: (ha-670527-m02) Reserving static IP address...
	I0915 07:02:27.685296   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find host DHCP lease matching {name: "ha-670527-m02", mac: "52:54:00:5d:e6:7b", ip: "192.168.39.222"} in network mk-ha-670527
	I0915 07:02:27.756355   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Getting to WaitForSSH function...
	I0915 07:02:27.756382   26835 main.go:141] libmachine: (ha-670527-m02) Reserved static IP address: 192.168.39.222
	I0915 07:02:27.756395   26835 main.go:141] libmachine: (ha-670527-m02) Waiting for SSH to be available...
	I0915 07:02:27.758799   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.759203   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:27.759239   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.759264   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Using SSH client type: external
	I0915 07:02:27.759280   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa (-rw-------)
	I0915 07:02:27.759360   26835 main.go:141] libmachine: (ha-670527-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:02:27.759381   26835 main.go:141] libmachine: (ha-670527-m02) DBG | About to run SSH command:
	I0915 07:02:27.759405   26835 main.go:141] libmachine: (ha-670527-m02) DBG | exit 0
	I0915 07:02:27.881993   26835 main.go:141] libmachine: (ha-670527-m02) DBG | SSH cmd err, output: <nil>: 
	I0915 07:02:27.882263   26835 main.go:141] libmachine: (ha-670527-m02) KVM machine creation complete!
	I0915 07:02:27.882572   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetConfigRaw
	I0915 07:02:27.883216   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:27.883392   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:27.883567   26835 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 07:02:27.883580   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:02:27.884843   26835 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 07:02:27.884854   26835 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 07:02:27.884859   26835 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 07:02:27.884864   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:27.887269   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.887620   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:27.887645   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.887817   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:27.887994   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.888138   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.888271   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:27.888459   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:27.888737   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:27.888751   26835 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 07:02:27.985337   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
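
Annotation: both WaitForSSH phases above simply run `exit 0` over SSH until it succeeds, first with the external ssh binary (flags shown at 07:02:27.759360) and then with the native client. The sketch below condenses the native-client check using golang.org/x/crypto/ssh; the address, user, and key path are copied from the log, and error handling is trimmed to the essentials.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAvailable returns nil once an `exit 0` command can be run over SSH,
// which is what the "Waiting for SSH to be available..." phase checks.
func sshAvailable(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := sshAvailable("192.168.39.222:22", "docker",
		"/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa")
	fmt.Println("ssh available:", err == nil)
}
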
	I0915 07:02:27.985360   26835 main.go:141] libmachine: Detecting the provisioner...
	I0915 07:02:27.985368   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:27.988310   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.988681   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:27.988710   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.988881   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:27.989093   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.989253   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.989382   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:27.989540   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:27.989706   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:27.989716   26835 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 07:02:28.086637   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 07:02:28.086737   26835 main.go:141] libmachine: found compatible host: buildroot
	I0915 07:02:28.086750   26835 main.go:141] libmachine: Provisioning with buildroot...
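
Annotation: provisioner detection above is just `cat /etc/os-release` followed by matching the distribution fields (NAME/ID reporting Buildroot leads to "found compatible host: buildroot"). A small parser for that key=value output is sketched below, assuming the same format as the log; the matching rule in the comment is an illustration of the idea, not the exact check libmachine performs.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into a map, stripping surrounding
// quotes, so a caller can compare ID/NAME against the provisioners it knows about.
func parseOSRelease(output string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(output))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println(info["ID"] == "buildroot") // true -> "found compatible host: buildroot"
}
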
	I0915 07:02:28.086757   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:28.086989   26835 buildroot.go:166] provisioning hostname "ha-670527-m02"
	I0915 07:02:28.087009   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:28.087209   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.089734   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.090173   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.090192   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.090340   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.090536   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.090684   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.090836   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.090985   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.091140   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.091151   26835 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527-m02 && echo "ha-670527-m02" | sudo tee /etc/hostname
	I0915 07:02:28.204738   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527-m02
	
	I0915 07:02:28.204784   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.207639   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.207977   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.208016   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.208156   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.208320   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.208469   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.208591   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.208772   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.208959   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.208981   26835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:02:28.314884   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:02:28.314912   26835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:02:28.314931   26835 buildroot.go:174] setting up certificates
	I0915 07:02:28.314941   26835 provision.go:84] configureAuth start
	I0915 07:02:28.314952   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:28.315229   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:28.318150   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.318522   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.318550   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.318741   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.320813   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.321195   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.321222   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.321330   26835 provision.go:143] copyHostCerts
	I0915 07:02:28.321372   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:02:28.321420   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:02:28.321432   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:02:28.321512   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:02:28.321614   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:02:28.321642   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:02:28.321650   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:02:28.321691   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:02:28.321857   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:02:28.321909   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:02:28.321919   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:02:28.321978   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:02:28.322077   26835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527-m02 san=[127.0.0.1 192.168.39.222 ha-670527-m02 localhost minikube]
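
Annotation: the provision.go line above generates a server certificate for the new machine, signed by the minikube CA, with org jenkins.ha-670527-m02 and the SANs listed (127.0.0.1, 192.168.39.222, ha-670527-m02, localhost, minikube). The sketch below produces an equivalent certificate with crypto/x509; it assumes the CA key is a PKCS#1 RSA key and shortens the file paths, and error handling is omitted for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA material from the profile directory (paths shortened; see the log for the full ones).
	caCertPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an "RSA PRIVATE KEY" block

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-670527-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[...] from the provision.go log line:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
		DNSNames:    []string{"ha-670527-m02", "localhost", "minikube"},
	}

	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
}
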
	I0915 07:02:28.421330   26835 provision.go:177] copyRemoteCerts
	I0915 07:02:28.421383   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:02:28.421405   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.424601   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.424944   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.424972   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.425197   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.425370   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.425520   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.425644   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:28.503791   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:02:28.503873   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 07:02:28.527264   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:02:28.527349   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:02:28.551098   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:02:28.551176   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:02:28.574646   26835 provision.go:87] duration metric: took 259.693344ms to configureAuth
	I0915 07:02:28.574675   26835 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:02:28.574894   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:28.574983   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.577824   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.578168   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.578194   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.578371   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.578605   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.578762   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.578892   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.579005   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.579184   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.579208   26835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:02:28.795402   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:02:28.795431   26835 main.go:141] libmachine: Checking connection to Docker...
	I0915 07:02:28.795440   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetURL
	I0915 07:02:28.796731   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Using libvirt version 6000000
	I0915 07:02:28.799441   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.799809   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.799839   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.800000   26835 main.go:141] libmachine: Docker is up and running!
	I0915 07:02:28.800012   26835 main.go:141] libmachine: Reticulating splines...
	I0915 07:02:28.800018   26835 client.go:171] duration metric: took 20.833732537s to LocalClient.Create
	I0915 07:02:28.800039   26835 start.go:167] duration metric: took 20.833793606s to libmachine.API.Create "ha-670527"
	I0915 07:02:28.800051   26835 start.go:293] postStartSetup for "ha-670527-m02" (driver="kvm2")
	I0915 07:02:28.800064   26835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:02:28.800086   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:28.800278   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:02:28.800295   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.802429   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.802753   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.802779   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.802940   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.803104   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.803264   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.803366   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:28.880649   26835 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:02:28.884603   26835 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:02:28.884624   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:02:28.884686   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:02:28.884754   26835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:02:28.884767   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:02:28.884845   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:02:28.894685   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:02:28.918001   26835 start.go:296] duration metric: took 117.936297ms for postStartSetup
	I0915 07:02:28.918048   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetConfigRaw
	I0915 07:02:28.918617   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:28.920944   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.921231   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.921258   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.921446   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:02:28.921629   26835 start.go:128] duration metric: took 20.973719773s to createHost
	I0915 07:02:28.921649   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.923851   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.924166   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.924185   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.924338   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.924520   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.924676   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.924813   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.924953   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.925114   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.925126   26835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:02:29.022483   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726383748.993202938
	
	I0915 07:02:29.022502   26835 fix.go:216] guest clock: 1726383748.993202938
	I0915 07:02:29.022508   26835 fix.go:229] Guest: 2024-09-15 07:02:28.993202938 +0000 UTC Remote: 2024-09-15 07:02:28.921638714 +0000 UTC m=+66.617004315 (delta=71.564224ms)
	I0915 07:02:29.022522   26835 fix.go:200] guest clock delta is within tolerance: 71.564224ms
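For reference, the guest-clock check logged just above compares the VM's "date +%s.%N" output against the host-side timestamp and accepts the result when the delta stays under a tolerance. Below is a minimal Go sketch of that comparison only; the tolerance value is a placeholder for illustration, not the threshold minikube's fix.go actually applies.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1726383748.993202938")
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest value taken from the log line above.
	guest, err := parseGuestClock("1726383748.993202938")
	if err != nil {
		panic(err)
	}
	host := time.Now() // in the log this is the "Remote:" timestamp captured on the host
	delta := guest.Sub(host)

	// Placeholder tolerance; the log only shows that a ~71ms delta was accepted.
	tolerance := 1 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}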
	I0915 07:02:29.022527   26835 start.go:83] releasing machines lock for "ha-670527-m02", held for 21.074734352s
	I0915 07:02:29.022542   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.022820   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:29.025216   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.025603   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:29.025630   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.027979   26835 out.go:177] * Found network options:
	I0915 07:02:29.029139   26835 out.go:177]   - NO_PROXY=192.168.39.54
	W0915 07:02:29.030186   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:02:29.030215   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.030670   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.030830   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.030909   26835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:02:29.030944   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	W0915 07:02:29.031015   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:02:29.031086   26835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:02:29.031108   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:29.033444   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.033582   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.033857   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:29.033891   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.033918   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:29.033936   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.034071   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:29.034185   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:29.034271   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:29.034356   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:29.034389   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:29.034517   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:29.034520   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:29.034637   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:29.274563   26835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:02:29.281548   26835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:02:29.281626   26835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:02:29.298606   26835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 07:02:29.298636   26835 start.go:495] detecting cgroup driver to use...
	I0915 07:02:29.298697   26835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:02:29.316035   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:02:29.331209   26835 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:02:29.331268   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:02:29.346284   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:02:29.360065   26835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:02:29.481409   26835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:02:29.645450   26835 docker.go:233] disabling docker service ...
	I0915 07:02:29.645525   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:02:29.660845   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:02:29.673836   26835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:02:29.793386   26835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:02:29.917775   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:02:29.932542   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:02:29.951401   26835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:02:29.951456   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.961788   26835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:02:29.961858   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.972394   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.982699   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.993216   26835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:02:30.004113   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:30.015561   26835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:30.033437   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:30.044452   26835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:02:30.054254   26835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 07:02:30.054304   26835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 07:02:30.067082   26835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:02:30.076775   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:02:30.191355   26835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:02:30.289201   26835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:02:30.289276   26835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:02:30.293893   26835 start.go:563] Will wait 60s for crictl version
	I0915 07:02:30.293943   26835 ssh_runner.go:195] Run: which crictl
	I0915 07:02:30.297544   26835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:02:30.346844   26835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:02:30.346933   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:02:30.380576   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:02:30.411524   26835 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:02:30.413233   26835 out.go:177]   - env NO_PROXY=192.168.39.54
	I0915 07:02:30.414608   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:30.417050   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:30.417313   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:30.417340   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:30.417499   26835 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:02:30.421898   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:02:30.435272   26835 mustload.go:65] Loading cluster: ha-670527
	I0915 07:02:30.435496   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:30.435748   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:30.435784   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:30.450257   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I0915 07:02:30.450737   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:30.451257   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:30.451281   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:30.451570   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:30.451738   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:30.453187   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:30.453516   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:30.453553   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:30.467729   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0915 07:02:30.468174   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:30.468573   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:30.468592   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:30.468866   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:30.468993   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:30.469125   26835 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.222
	I0915 07:02:30.469152   26835 certs.go:194] generating shared ca certs ...
	I0915 07:02:30.469164   26835 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:30.469278   26835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:02:30.469314   26835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:02:30.469322   26835 certs.go:256] generating profile certs ...
	I0915 07:02:30.469384   26835 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:02:30.469408   26835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7
	I0915 07:02:30.469422   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.222 192.168.39.254]
	I0915 07:02:30.555578   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7 ...
	I0915 07:02:30.555605   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7: {Name:mk9d3e3970fd43c4cc01395eb4af6ffaf9bbfa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:30.555762   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7 ...
	I0915 07:02:30.555774   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7: {Name:mkdb7ccda7f27e402ed4041657e1289ce0e105a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:30.555835   26835 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:02:30.555958   26835 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
	I0915 07:02:30.556078   26835 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:02:30.556092   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:02:30.556105   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:02:30.556118   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:02:30.556130   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:02:30.556149   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:02:30.556163   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:02:30.556175   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:02:30.556192   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:02:30.556238   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:02:30.556265   26835 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:02:30.556276   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:02:30.556301   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:02:30.556322   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:02:30.556344   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:02:30.556381   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:02:30.556404   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:30.556418   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:02:30.556430   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:02:30.556459   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:30.559065   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:30.559349   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:30.559367   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:30.559524   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:30.559699   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:30.559800   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:30.559886   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:30.634233   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0915 07:02:30.639638   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0915 07:02:30.651969   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0915 07:02:30.656344   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0915 07:02:30.670370   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0915 07:02:30.674990   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0915 07:02:30.685502   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0915 07:02:30.689789   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0915 07:02:30.701427   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0915 07:02:30.705820   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0915 07:02:30.716115   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0915 07:02:30.720165   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0915 07:02:30.730491   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:02:30.758776   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:02:30.786053   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:02:30.812918   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:02:30.839709   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0915 07:02:30.865241   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:02:30.887692   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:02:30.909831   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:02:30.932076   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:02:30.954043   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:02:30.980964   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:02:31.007544   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0915 07:02:31.025713   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0915 07:02:31.043734   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0915 07:02:31.061392   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0915 07:02:31.079044   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0915 07:02:31.096440   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0915 07:02:31.114730   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0915 07:02:31.133403   26835 ssh_runner.go:195] Run: openssl version
	I0915 07:02:31.139205   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:02:31.150172   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:31.155101   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:31.155163   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:31.160723   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:02:31.171690   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:02:31.182811   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:02:31.187381   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:02:31.187428   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:02:31.193069   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 07:02:31.203749   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:02:31.214303   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:02:31.219142   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:02:31.219208   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:02:31.225126   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:02:31.236094   26835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:02:31.240285   26835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:02:31.240331   26835 kubeadm.go:934] updating node {m02 192.168.39.222 8443 v1.31.1 crio true true} ...
	I0915 07:02:31.240423   26835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:02:31.240456   26835 kube-vip.go:115] generating kube-vip config ...
	I0915 07:02:31.240499   26835 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:02:31.257343   26835 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:02:31.257420   26835 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0915 07:02:31.257479   26835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:02:31.267300   26835 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0915 07:02:31.267363   26835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0915 07:02:31.276806   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0915 07:02:31.276830   26835 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0915 07:02:31.276844   26835 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0915 07:02:31.276832   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:02:31.276965   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:02:31.281259   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0915 07:02:31.281285   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0915 07:02:33.423127   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:02:33.423198   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:02:33.428293   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0915 07:02:33.428323   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0915 07:02:34.469788   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:02:34.485662   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:02:34.485758   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:02:34.490171   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0915 07:02:34.490205   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
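Each of the kubectl/kubeadm/kubelet transfers above follows the same pattern: stat the target under /var/lib/minikube/binaries/v1.31.1 on the guest and scp the cached binary only when the stat fails. The sketch below reproduces that existence check over SSH using golang.org/x/crypto/ssh; the key path and the binary being checked are placeholders, and this is not minikube's own ssh_runner code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// needsCopy runs the same `stat -c "%s %y"` probe seen in the log and reports
// whether the remote binary is missing (non-zero exit means we must copy).
func needsCopy(client *ssh.Client, remotePath string) (bool, error) {
	sess, err := client.NewSession()
	if err != nil {
		return false, err
	}
	defer sess.Close()
	if err := sess.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remotePath)); err != nil {
		return true, nil
	}
	return false, nil
}

func main() {
	key, err := os.ReadFile("/home/jenkins/.ssh/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.222:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	missing, err := needsCopy(client, "/var/lib/minikube/binaries/v1.31.1/kubelet")
	if err != nil {
		panic(err)
	}
	fmt.Println("needs copy:", missing) // if true, the cached binary would be scp'd over
}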
	I0915 07:02:34.799607   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0915 07:02:34.809569   26835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0915 07:02:34.827915   26835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:02:34.845023   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:02:34.861258   26835 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:02:34.865438   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:02:34.877732   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:02:35.000696   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:02:35.018898   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:35.019383   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:35.019436   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:35.034104   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45453
	I0915 07:02:35.034487   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:35.034941   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:35.034958   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:35.035235   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:35.035476   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:35.035625   26835 start.go:317] joinCluster: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0915 07:02:35.035755   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0915 07:02:35.035775   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:35.038626   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:35.038972   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:35.038996   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:35.039145   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:35.039311   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:35.039444   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:35.039578   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:35.208601   26835 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:35.208645   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 50hbpr.238ifb3e9gglapy2 --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I0915 07:02:57.207397   26835 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 50hbpr.238ifb3e9gglapy2 --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (21.998728599s)
	I0915 07:02:57.207432   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0915 07:02:57.776899   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-670527-m02 minikube.k8s.io/updated_at=2024_09_15T07_02_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=ha-670527 minikube.k8s.io/primary=false
	I0915 07:02:57.893456   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-670527-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0915 07:02:58.002512   26835 start.go:319] duration metric: took 22.966886384s to joinCluster
	I0915 07:02:58.002576   26835 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:58.002874   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:58.004496   26835 out.go:177] * Verifying Kubernetes components...
	I0915 07:02:58.005948   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:02:58.281369   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:02:58.297533   26835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:02:58.297786   26835 kapi.go:59] client config for ha-670527: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key", CAFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0915 07:02:58.297873   26835 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0915 07:02:58.298084   26835 node_ready.go:35] waiting up to 6m0s for node "ha-670527-m02" to be "Ready" ...
	I0915 07:02:58.298195   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:58.298206   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:58.298217   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:58.298224   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:58.309029   26835 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0915 07:02:58.798950   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:58.798970   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:58.798977   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:58.798981   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:58.803287   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:02:59.298330   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:59.298355   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:59.298363   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:59.298366   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:59.302108   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:02:59.799045   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:59.799070   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:59.799087   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:59.799093   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:59.803446   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:00.299017   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:00.299038   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:00.299048   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:00.299055   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:00.303017   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:00.303922   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
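The repeated GET requests against /api/v1/nodes/ha-670527-m02 above are the node_ready wait polling for the node's Ready condition roughly every 500ms, up to the 6m0s limit noted earlier. A comparable poll written directly against client-go could look like the sketch below; the kubeconfig path, node name and timeout are taken from this log, while the rest is illustrative and not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has its Ready condition set to True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path as seen in the loader.go line above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19644-6166/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // matches the 6m0s wait
	defer cancel()

	for {
		ready, err := nodeReady(ctx, cs, "ha-670527-m02")
		if err == nil && ready {
			fmt.Println("node ha-670527-m02 is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to become Ready")
			return
		case <-time.After(500 * time.Millisecond): // the log polls roughly every 500ms
		}
	}
}

The status stays "Ready":"False" in the log until the CNI and kubelet on m02 finish coming up, which is why the poll keeps cycling below.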
	I0915 07:03:00.799140   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:00.799164   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:00.799175   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:00.799180   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:00.802947   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:01.298949   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:01.298969   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:01.298976   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:01.298980   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:01.302732   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:01.798310   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:01.798330   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:01.798338   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:01.798343   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:01.801085   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:02.299204   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:02.299224   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:02.299232   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:02.299235   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:02.302816   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:02.798453   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:02.798473   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:02.798481   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:02.798485   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:02.802331   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:02.802892   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:03.298701   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:03.298740   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:03.298751   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:03.298757   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:03.301969   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:03.799044   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:03.799065   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:03.799073   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:03.799077   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:03.802064   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:04.299042   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:04.299062   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:04.299070   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:04.299074   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:04.302546   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:04.798566   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:04.798588   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:04.798599   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:04.798603   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:04.802128   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:05.298324   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:05.298343   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:05.298351   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:05.298355   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:05.301627   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:05.304686   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:05.798531   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:05.798552   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:05.798560   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:05.798565   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:05.805362   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:03:06.298962   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:06.298986   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:06.298994   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:06.298999   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:06.301984   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:06.799027   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:06.799049   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:06.799059   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:06.799064   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:06.805682   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:03:07.298899   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:07.298920   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:07.298927   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:07.298930   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:07.302021   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:07.798423   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:07.798449   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:07.798457   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:07.798465   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:07.801464   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:07.802365   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:08.299098   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:08.299117   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:08.299124   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:08.299129   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:08.301958   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:08.799072   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:08.799096   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:08.799105   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:08.799110   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:08.802190   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:09.299221   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:09.299242   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:09.299251   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:09.299254   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:09.302314   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:09.799051   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:09.799073   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:09.799081   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:09.799087   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:09.802695   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:09.803308   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:10.299120   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:10.299143   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:10.299152   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:10.299160   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:10.302205   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:10.799023   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:10.799045   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:10.799055   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:10.799062   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:10.802433   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:11.298625   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:11.298644   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:11.298652   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:11.298656   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:11.301528   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:11.799072   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:11.799096   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:11.799107   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:11.799113   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:11.801945   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:12.299041   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:12.299062   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:12.299070   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:12.299091   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:12.302359   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:12.302978   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:12.799029   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:12.799052   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:12.799059   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:12.799063   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:12.802509   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:13.298858   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:13.298879   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:13.298886   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:13.298891   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:13.302105   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:13.799152   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:13.799172   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:13.799180   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:13.799184   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:13.802280   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:14.299044   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:14.299063   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:14.299071   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:14.299074   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:14.301972   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:14.799071   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:14.799094   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:14.799103   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:14.799110   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:14.802858   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:14.803543   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:15.298423   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:15.298443   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:15.298450   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:15.298456   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:15.301463   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:15.799040   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:15.799060   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:15.799067   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:15.799071   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:15.802044   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:16.299050   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:16.299073   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:16.299085   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:16.299091   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:16.302207   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:16.799031   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:16.799051   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:16.799058   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:16.799061   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:16.802023   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.299041   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:17.299066   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.299076   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.299081   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.301770   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.302269   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:17.799048   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:17.799068   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.799076   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.799080   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.802248   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:17.802880   26835 node_ready.go:49] node "ha-670527-m02" has status "Ready":"True"
	I0915 07:03:17.802897   26835 node_ready.go:38] duration metric: took 19.504788225s for node "ha-670527-m02" to be "Ready" ...
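The node_ready wait above simply polls GET /api/v1/nodes/ha-670527-m02 on a roughly 500ms cadence until the node reports the NodeReady condition as True. A minimal client-go sketch of the same style of check, assuming a reachable kubeconfig (the path and the 6-minute cap below are placeholders for illustration, not values taken from this run):

    // nodeready_sketch.go - illustrative only; not minikube's actual node_ready implementation.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path; the test harness uses its own profile directory instead.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 500ms (the cadence visible in the log); the 6-minute timeout is illustrative.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-670527-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("node ready wait finished, err:", err)
    }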
	I0915 07:03:17.802907   26835 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:03:17.803009   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:17.803023   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.803033   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.803037   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.807239   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:17.813322   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.813405   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4w6x7
	I0915 07:03:17.813416   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.813426   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.813432   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.816253   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.817178   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:17.817193   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.817201   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.817206   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.819382   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.819889   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.819905   26835 pod_ready.go:82] duration metric: took 6.561965ms for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.819916   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.819970   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lpj44
	I0915 07:03:17.819979   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.819989   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.819995   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.822316   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.823272   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:17.823286   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.823293   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.823297   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.825230   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:03:17.825653   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.825667   26835 pod_ready.go:82] duration metric: took 5.744951ms for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.825675   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.825716   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527
	I0915 07:03:17.825723   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.825730   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.825733   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.827910   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.828335   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:17.828349   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.828357   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.828361   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.830477   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.830858   26835 pod_ready.go:93] pod "etcd-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.830872   26835 pod_ready.go:82] duration metric: took 5.191041ms for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.830880   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.830918   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m02
	I0915 07:03:17.830928   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.830935   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.830940   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.833032   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.833460   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:17.833473   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.833480   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.833483   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.835371   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:03:17.835725   26835 pod_ready.go:93] pod "etcd-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.835739   26835 pod_ready.go:82] duration metric: took 4.853737ms for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.835751   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.999289   26835 request.go:632] Waited for 163.492142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:03:17.999360   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:03:17.999371   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.999381   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.999393   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.003149   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
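The "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter (a token bucket), not from server-side API priority and fairness. A small sketch of that mechanism, assuming the stock rest.Config defaults of 5 QPS with a burst of 10 (the exact limits configured in this binary are not shown in the log):

    // ratelimit_sketch.go - illustrative only.
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// 5 requests/second steady state with a burst of 10 - the common client-go defaults (assumed here).
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
    	start := time.Now()
    	for i := 0; i < 15; i++ {
    		limiter.Accept() // blocks until a token is available, producing waits like those logged above
    	}
    	fmt.Println("15 calls took", time.Since(start))
    }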
	I0915 07:03:18.199247   26835 request.go:632] Waited for 195.2673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.199321   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.199328   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.199338   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.199350   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.202285   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:18.202824   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:18.202840   26835 pod_ready.go:82] duration metric: took 367.082845ms for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:18.202849   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:18.400062   26835 request.go:632] Waited for 197.14969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:03:18.400162   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:03:18.400174   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.400185   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.400192   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.403454   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:18.599574   26835 request.go:632] Waited for 195.382614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:18.599625   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:18.599632   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.599645   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.599651   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.606574   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:03:18.607107   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:18.607125   26835 pod_ready.go:82] duration metric: took 404.270298ms for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:18.607134   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:18.799081   26835 request.go:632] Waited for 191.883757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:03:18.799145   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:03:18.799151   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.799158   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.799162   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.802381   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:18.999738   26835 request.go:632] Waited for 196.363128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.999821   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.999832   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.999840   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.999844   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.003038   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:19.003594   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:19.003613   26835 pod_ready.go:82] duration metric: took 396.471292ms for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.003628   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.199678   26835 request.go:632] Waited for 195.975884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:03:19.199745   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:03:19.199752   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.199761   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.199768   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.203357   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:19.399417   26835 request.go:632] Waited for 195.353477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:19.399506   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:19.399518   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.399528   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.399535   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.402623   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:19.403171   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:19.403193   26835 pod_ready.go:82] duration metric: took 399.556435ms for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.403206   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.599292   26835 request.go:632] Waited for 196.019957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:03:19.599372   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:03:19.599383   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.599394   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.599403   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.602327   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:19.799354   26835 request.go:632] Waited for 196.344034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:19.799408   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:19.799413   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.799420   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.799423   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.802227   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:19.803970   26835 pod_ready.go:93] pod "kube-proxy-25xtk" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:19.803991   26835 pod_ready.go:82] duration metric: took 400.772903ms for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.804002   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.999969   26835 request.go:632] Waited for 195.901993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:03:20.000067   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:03:20.000076   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.000086   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.000096   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.003916   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.199923   26835 request.go:632] Waited for 195.280331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.199979   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.199986   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.199996   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.200001   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.203332   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.203957   26835 pod_ready.go:93] pod "kube-proxy-kt79t" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:20.203977   26835 pod_ready.go:82] duration metric: took 399.967571ms for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.203989   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.399069   26835 request.go:632] Waited for 195.010415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:03:20.399130   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:03:20.399136   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.399143   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.399146   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.403009   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.599403   26835 request.go:632] Waited for 195.788748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:20.599463   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:20.599471   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.599480   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.599485   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.602055   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:20.602618   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:20.602636   26835 pod_ready.go:82] duration metric: took 398.640734ms for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.602646   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.799552   26835 request.go:632] Waited for 196.846292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:03:20.799620   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:03:20.799627   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.799634   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.799638   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.802765   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.999660   26835 request.go:632] Waited for 196.342764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.999732   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.999738   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.999744   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.999747   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.002704   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:21.003350   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:21.003385   26835 pod_ready.go:82] duration metric: took 400.731335ms for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:21.003401   26835 pod_ready.go:39] duration metric: took 3.200461526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:03:21.003421   26835 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:03:21.003481   26835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:03:21.021041   26835 api_server.go:72] duration metric: took 23.018438257s to wait for apiserver process to appear ...
	I0915 07:03:21.021070   26835 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:03:21.021102   26835 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0915 07:03:21.028517   26835 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
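The healthz wait is a plain HTTPS GET against the apiserver; the loop is satisfied once it sees a 200 response with body "ok". A rough standalone sketch of that probe (TLS verification is skipped here only to keep the example short; the real client trusts the cluster CA):

    // healthz_probe.go - illustrative sketch only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification is a shortcut for this sketch; do not do this in real checks.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.54:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable yet:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect "200 ok"
    }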
	I0915 07:03:21.028579   26835 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0915 07:03:21.028587   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.028594   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.028598   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.029612   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:03:21.029684   26835 api_server.go:141] control plane version: v1.31.1
	I0915 07:03:21.029697   26835 api_server.go:131] duration metric: took 8.620595ms to wait for apiserver health ...
	I0915 07:03:21.029704   26835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:03:21.200146   26835 request.go:632] Waited for 170.344732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.200201   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.200207   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.200214   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.200219   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.205210   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:21.209322   26835 system_pods.go:59] 17 kube-system pods found
	I0915 07:03:21.209349   26835 system_pods.go:61] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:03:21.209355   26835 system_pods.go:61] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:03:21.209358   26835 system_pods.go:61] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:03:21.209362   26835 system_pods.go:61] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:03:21.209365   26835 system_pods.go:61] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:03:21.209369   26835 system_pods.go:61] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:03:21.209372   26835 system_pods.go:61] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:03:21.209375   26835 system_pods.go:61] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:03:21.209378   26835 system_pods.go:61] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:03:21.209381   26835 system_pods.go:61] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:03:21.209384   26835 system_pods.go:61] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:03:21.209386   26835 system_pods.go:61] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:03:21.209389   26835 system_pods.go:61] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:03:21.209393   26835 system_pods.go:61] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:03:21.209399   26835 system_pods.go:61] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:03:21.209402   26835 system_pods.go:61] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:03:21.209404   26835 system_pods.go:61] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:03:21.209410   26835 system_pods.go:74] duration metric: took 179.701914ms to wait for pod list to return data ...
	I0915 07:03:21.209420   26835 default_sa.go:34] waiting for default service account to be created ...
	I0915 07:03:21.399918   26835 request.go:632] Waited for 190.415031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:03:21.399974   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:03:21.399979   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.399993   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.399998   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.404183   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:21.404968   26835 default_sa.go:45] found service account: "default"
	I0915 07:03:21.404990   26835 default_sa.go:55] duration metric: took 195.564704ms for default service account to be created ...
	I0915 07:03:21.405001   26835 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 07:03:21.599538   26835 request.go:632] Waited for 194.456381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.599591   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.599596   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.599606   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.599610   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.604857   26835 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:03:21.608974   26835 system_pods.go:86] 17 kube-system pods found
	I0915 07:03:21.609002   26835 system_pods.go:89] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:03:21.609010   26835 system_pods.go:89] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:03:21.609016   26835 system_pods.go:89] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:03:21.609021   26835 system_pods.go:89] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:03:21.609026   26835 system_pods.go:89] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:03:21.609030   26835 system_pods.go:89] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:03:21.609036   26835 system_pods.go:89] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:03:21.609042   26835 system_pods.go:89] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:03:21.609048   26835 system_pods.go:89] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:03:21.609054   26835 system_pods.go:89] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:03:21.609061   26835 system_pods.go:89] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:03:21.609069   26835 system_pods.go:89] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:03:21.609075   26835 system_pods.go:89] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:03:21.609081   26835 system_pods.go:89] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:03:21.609087   26835 system_pods.go:89] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:03:21.609093   26835 system_pods.go:89] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:03:21.609099   26835 system_pods.go:89] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:03:21.609110   26835 system_pods.go:126] duration metric: took 204.103519ms to wait for k8s-apps to be running ...
	I0915 07:03:21.609130   26835 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:03:21.609180   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:03:21.625587   26835 system_svc.go:56] duration metric: took 16.446998ms WaitForService to wait for kubelet
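The kubelet check shells out to systemctl and relies solely on the exit code: 0 means the unit is active. A minimal local equivalent is sketched below; the real run executes the command via sudo over SSH inside the guest VM, so the invocation here is only an approximation:

    // kubelet_active.go - illustrative; not the exact command string used by the test.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit code alone signals active (0) vs. inactive/failed (non-zero).
    	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("kubelet service is not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }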
	I0915 07:03:21.625619   26835 kubeadm.go:582] duration metric: took 23.623022618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:03:21.625636   26835 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:03:21.800081   26835 request.go:632] Waited for 174.329572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0915 07:03:21.800145   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0915 07:03:21.800153   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.800164   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.800174   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.804174   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:21.804998   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:03:21.805032   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:03:21.805052   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:03:21.805057   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:03:21.805063   26835 node_conditions.go:105] duration metric: took 179.422133ms to run NodePressure ...
	I0915 07:03:21.805076   26835 start.go:241] waiting for startup goroutines ...
	I0915 07:03:21.805110   26835 start.go:255] writing updated cluster config ...
	I0915 07:03:21.807329   26835 out.go:201] 
	I0915 07:03:21.808633   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:03:21.808730   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:03:21.810021   26835 out.go:177] * Starting "ha-670527-m03" control-plane node in "ha-670527" cluster
	I0915 07:03:21.811002   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:03:21.811018   26835 cache.go:56] Caching tarball of preloaded images
	I0915 07:03:21.811099   26835 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:03:21.811110   26835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:03:21.811213   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:03:21.811397   26835 start.go:360] acquireMachinesLock for ha-670527-m03: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:03:21.811447   26835 start.go:364] duration metric: took 30.463µs to acquireMachinesLock for "ha-670527-m03"
	I0915 07:03:21.811468   26835 start.go:93] Provisioning new machine with config: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:03:21.811593   26835 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0915 07:03:21.813055   26835 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 07:03:21.813128   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:21.813160   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:21.827379   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0915 07:03:21.827819   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:21.828285   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:21.828304   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:21.828594   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:21.828770   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:21.828896   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:21.829034   26835 start.go:159] libmachine.API.Create for "ha-670527" (driver="kvm2")
	I0915 07:03:21.829062   26835 client.go:168] LocalClient.Create starting
	I0915 07:03:21.829084   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 07:03:21.829112   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:03:21.829125   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:03:21.829180   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 07:03:21.829198   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:03:21.829208   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:03:21.829220   26835 main.go:141] libmachine: Running pre-create checks...
	I0915 07:03:21.829228   26835 main.go:141] libmachine: (ha-670527-m03) Calling .PreCreateCheck
	I0915 07:03:21.829350   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetConfigRaw
	I0915 07:03:21.829692   26835 main.go:141] libmachine: Creating machine...
	I0915 07:03:21.829705   26835 main.go:141] libmachine: (ha-670527-m03) Calling .Create
	I0915 07:03:21.829836   26835 main.go:141] libmachine: (ha-670527-m03) Creating KVM machine...
	I0915 07:03:21.830982   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found existing default KVM network
	I0915 07:03:21.831136   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found existing private KVM network mk-ha-670527
	I0915 07:03:21.831307   26835 main.go:141] libmachine: (ha-670527-m03) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03 ...
	I0915 07:03:21.831328   26835 main.go:141] libmachine: (ha-670527-m03) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 07:03:21.831398   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:21.831307   27575 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:03:21.831470   26835 main.go:141] libmachine: (ha-670527-m03) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 07:03:22.066896   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:22.066781   27575 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa...
	I0915 07:03:22.155557   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:22.155430   27575 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/ha-670527-m03.rawdisk...
	I0915 07:03:22.155590   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Writing magic tar header
	I0915 07:03:22.155600   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Writing SSH key tar header
	I0915 07:03:22.155608   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:22.155559   27575 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03 ...
	I0915 07:03:22.155677   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03
	I0915 07:03:22.155713   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 07:03:22.155729   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03 (perms=drwx------)
	I0915 07:03:22.155739   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:03:22.155750   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 07:03:22.155763   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 07:03:22.155775   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 07:03:22.155780   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins
	I0915 07:03:22.155786   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home
	I0915 07:03:22.155793   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Skipping /home - not owner
	I0915 07:03:22.155841   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 07:03:22.155872   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 07:03:22.155894   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 07:03:22.155910   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 07:03:22.155933   26835 main.go:141] libmachine: (ha-670527-m03) Creating domain...
	I0915 07:03:22.156757   26835 main.go:141] libmachine: (ha-670527-m03) define libvirt domain using xml: 
	I0915 07:03:22.156768   26835 main.go:141] libmachine: (ha-670527-m03) <domain type='kvm'>
	I0915 07:03:22.156795   26835 main.go:141] libmachine: (ha-670527-m03)   <name>ha-670527-m03</name>
	I0915 07:03:22.156819   26835 main.go:141] libmachine: (ha-670527-m03)   <memory unit='MiB'>2200</memory>
	I0915 07:03:22.156847   26835 main.go:141] libmachine: (ha-670527-m03)   <vcpu>2</vcpu>
	I0915 07:03:22.156869   26835 main.go:141] libmachine: (ha-670527-m03)   <features>
	I0915 07:03:22.156892   26835 main.go:141] libmachine: (ha-670527-m03)     <acpi/>
	I0915 07:03:22.156902   26835 main.go:141] libmachine: (ha-670527-m03)     <apic/>
	I0915 07:03:22.156909   26835 main.go:141] libmachine: (ha-670527-m03)     <pae/>
	I0915 07:03:22.156915   26835 main.go:141] libmachine: (ha-670527-m03)     
	I0915 07:03:22.156922   26835 main.go:141] libmachine: (ha-670527-m03)   </features>
	I0915 07:03:22.156933   26835 main.go:141] libmachine: (ha-670527-m03)   <cpu mode='host-passthrough'>
	I0915 07:03:22.156940   26835 main.go:141] libmachine: (ha-670527-m03)   
	I0915 07:03:22.156950   26835 main.go:141] libmachine: (ha-670527-m03)   </cpu>
	I0915 07:03:22.156969   26835 main.go:141] libmachine: (ha-670527-m03)   <os>
	I0915 07:03:22.156987   26835 main.go:141] libmachine: (ha-670527-m03)     <type>hvm</type>
	I0915 07:03:22.156999   26835 main.go:141] libmachine: (ha-670527-m03)     <boot dev='cdrom'/>
	I0915 07:03:22.157009   26835 main.go:141] libmachine: (ha-670527-m03)     <boot dev='hd'/>
	I0915 07:03:22.157017   26835 main.go:141] libmachine: (ha-670527-m03)     <bootmenu enable='no'/>
	I0915 07:03:22.157025   26835 main.go:141] libmachine: (ha-670527-m03)   </os>
	I0915 07:03:22.157035   26835 main.go:141] libmachine: (ha-670527-m03)   <devices>
	I0915 07:03:22.157043   26835 main.go:141] libmachine: (ha-670527-m03)     <disk type='file' device='cdrom'>
	I0915 07:03:22.157056   26835 main.go:141] libmachine: (ha-670527-m03)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/boot2docker.iso'/>
	I0915 07:03:22.157071   26835 main.go:141] libmachine: (ha-670527-m03)       <target dev='hdc' bus='scsi'/>
	I0915 07:03:22.157083   26835 main.go:141] libmachine: (ha-670527-m03)       <readonly/>
	I0915 07:03:22.157093   26835 main.go:141] libmachine: (ha-670527-m03)     </disk>
	I0915 07:03:22.157102   26835 main.go:141] libmachine: (ha-670527-m03)     <disk type='file' device='disk'>
	I0915 07:03:22.157114   26835 main.go:141] libmachine: (ha-670527-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 07:03:22.157127   26835 main.go:141] libmachine: (ha-670527-m03)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/ha-670527-m03.rawdisk'/>
	I0915 07:03:22.157135   26835 main.go:141] libmachine: (ha-670527-m03)       <target dev='hda' bus='virtio'/>
	I0915 07:03:22.157154   26835 main.go:141] libmachine: (ha-670527-m03)     </disk>
	I0915 07:03:22.157170   26835 main.go:141] libmachine: (ha-670527-m03)     <interface type='network'>
	I0915 07:03:22.157182   26835 main.go:141] libmachine: (ha-670527-m03)       <source network='mk-ha-670527'/>
	I0915 07:03:22.157192   26835 main.go:141] libmachine: (ha-670527-m03)       <model type='virtio'/>
	I0915 07:03:22.157200   26835 main.go:141] libmachine: (ha-670527-m03)     </interface>
	I0915 07:03:22.157207   26835 main.go:141] libmachine: (ha-670527-m03)     <interface type='network'>
	I0915 07:03:22.157213   26835 main.go:141] libmachine: (ha-670527-m03)       <source network='default'/>
	I0915 07:03:22.157219   26835 main.go:141] libmachine: (ha-670527-m03)       <model type='virtio'/>
	I0915 07:03:22.157224   26835 main.go:141] libmachine: (ha-670527-m03)     </interface>
	I0915 07:03:22.157228   26835 main.go:141] libmachine: (ha-670527-m03)     <serial type='pty'>
	I0915 07:03:22.157233   26835 main.go:141] libmachine: (ha-670527-m03)       <target port='0'/>
	I0915 07:03:22.157242   26835 main.go:141] libmachine: (ha-670527-m03)     </serial>
	I0915 07:03:22.157249   26835 main.go:141] libmachine: (ha-670527-m03)     <console type='pty'>
	I0915 07:03:22.157254   26835 main.go:141] libmachine: (ha-670527-m03)       <target type='serial' port='0'/>
	I0915 07:03:22.157261   26835 main.go:141] libmachine: (ha-670527-m03)     </console>
	I0915 07:03:22.157265   26835 main.go:141] libmachine: (ha-670527-m03)     <rng model='virtio'>
	I0915 07:03:22.157274   26835 main.go:141] libmachine: (ha-670527-m03)       <backend model='random'>/dev/random</backend>
	I0915 07:03:22.157280   26835 main.go:141] libmachine: (ha-670527-m03)     </rng>
	I0915 07:03:22.157286   26835 main.go:141] libmachine: (ha-670527-m03)     
	I0915 07:03:22.157290   26835 main.go:141] libmachine: (ha-670527-m03)     
	I0915 07:03:22.157297   26835 main.go:141] libmachine: (ha-670527-m03)   </devices>
	I0915 07:03:22.157301   26835 main.go:141] libmachine: (ha-670527-m03) </domain>
	I0915 07:03:22.157310   26835 main.go:141] libmachine: (ha-670527-m03) 
	I0915 07:03:22.163801   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:1a:da:d7 in network default
	I0915 07:03:22.164325   26835 main.go:141] libmachine: (ha-670527-m03) Ensuring networks are active...
	I0915 07:03:22.164341   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:22.165076   26835 main.go:141] libmachine: (ha-670527-m03) Ensuring network default is active
	I0915 07:03:22.165412   26835 main.go:141] libmachine: (ha-670527-m03) Ensuring network mk-ha-670527 is active
	I0915 07:03:22.165935   26835 main.go:141] libmachine: (ha-670527-m03) Getting domain xml...
	I0915 07:03:22.166619   26835 main.go:141] libmachine: (ha-670527-m03) Creating domain...
	I0915 07:03:23.403097   26835 main.go:141] libmachine: (ha-670527-m03) Waiting to get IP...
	I0915 07:03:23.404077   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:23.404560   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:23.404596   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:23.404542   27575 retry.go:31] will retry after 216.027867ms: waiting for machine to come up
	I0915 07:03:23.622217   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:23.622712   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:23.622739   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:23.622650   27575 retry.go:31] will retry after 379.106761ms: waiting for machine to come up
	I0915 07:03:24.002939   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:24.003411   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:24.003467   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:24.003366   27575 retry.go:31] will retry after 293.965798ms: waiting for machine to come up
	I0915 07:03:24.298820   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:24.299267   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:24.299299   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:24.299222   27575 retry.go:31] will retry after 496.993891ms: waiting for machine to come up
	I0915 07:03:24.798010   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:24.798485   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:24.798512   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:24.798399   27575 retry.go:31] will retry after 681.561294ms: waiting for machine to come up
	I0915 07:03:25.481130   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:25.481859   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:25.481880   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:25.481822   27575 retry.go:31] will retry after 816.437613ms: waiting for machine to come up
	I0915 07:03:26.299463   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:26.299923   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:26.299949   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:26.299880   27575 retry.go:31] will retry after 933.139751ms: waiting for machine to come up
	I0915 07:03:27.234824   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:27.235283   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:27.235305   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:27.235231   27575 retry.go:31] will retry after 1.01772382s: waiting for machine to come up
	I0915 07:03:28.254301   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:28.254706   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:28.254734   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:28.254660   27575 retry.go:31] will retry after 1.647555623s: waiting for machine to come up
	I0915 07:03:29.904388   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:29.904947   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:29.904974   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:29.904882   27575 retry.go:31] will retry after 1.501301991s: waiting for machine to come up
	I0915 07:03:31.407599   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:31.407990   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:31.408023   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:31.407965   27575 retry.go:31] will retry after 1.860767384s: waiting for machine to come up
	I0915 07:03:33.270491   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:33.271016   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:33.271038   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:33.270970   27575 retry.go:31] will retry after 2.482506082s: waiting for machine to come up
	I0915 07:03:35.756546   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:35.756901   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:35.756923   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:35.756875   27575 retry.go:31] will retry after 3.598234046s: waiting for machine to come up
	I0915 07:03:39.356217   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:39.356615   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:39.356642   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:39.356579   27575 retry.go:31] will retry after 5.569722625s: waiting for machine to come up
	I0915 07:03:44.930420   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:44.930911   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has current primary IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:44.930935   26835 main.go:141] libmachine: (ha-670527-m03) Found IP for machine: 192.168.39.4
	I0915 07:03:44.930946   26835 main.go:141] libmachine: (ha-670527-m03) Reserving static IP address...
	I0915 07:03:44.931285   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find host DHCP lease matching {name: "ha-670527-m03", mac: "52:54:00:b4:8f:a3", ip: "192.168.39.4"} in network mk-ha-670527
	I0915 07:03:45.003476   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Getting to WaitForSSH function...
	I0915 07:03:45.003534   26835 main.go:141] libmachine: (ha-670527-m03) Reserved static IP address: 192.168.39.4
	I0915 07:03:45.003549   26835 main.go:141] libmachine: (ha-670527-m03) Waiting for SSH to be available...
	I0915 07:03:45.007019   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:45.007412   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527
	I0915 07:03:45.007429   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find defined IP address of network mk-ha-670527 interface with MAC address 52:54:00:b4:8f:a3
	I0915 07:03:45.007648   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH client type: external
	I0915 07:03:45.007670   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa (-rw-------)
	I0915 07:03:45.007766   26835 main.go:141] libmachine: (ha-670527-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:03:45.007792   26835 main.go:141] libmachine: (ha-670527-m03) DBG | About to run SSH command:
	I0915 07:03:45.007810   26835 main.go:141] libmachine: (ha-670527-m03) DBG | exit 0
	I0915 07:03:45.011926   26835 main.go:141] libmachine: (ha-670527-m03) DBG | SSH cmd err, output: exit status 255: 
	I0915 07:03:45.011947   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0915 07:03:45.011953   26835 main.go:141] libmachine: (ha-670527-m03) DBG | command : exit 0
	I0915 07:03:45.011959   26835 main.go:141] libmachine: (ha-670527-m03) DBG | err     : exit status 255
	I0915 07:03:45.011968   26835 main.go:141] libmachine: (ha-670527-m03) DBG | output  : 
	I0915 07:03:48.012738   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Getting to WaitForSSH function...
	I0915 07:03:48.015038   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.015551   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.015578   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.015738   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH client type: external
	I0915 07:03:48.015767   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa (-rw-------)
	I0915 07:03:48.015797   26835 main.go:141] libmachine: (ha-670527-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:03:48.015808   26835 main.go:141] libmachine: (ha-670527-m03) DBG | About to run SSH command:
	I0915 07:03:48.015816   26835 main.go:141] libmachine: (ha-670527-m03) DBG | exit 0
	I0915 07:03:48.142223   26835 main.go:141] libmachine: (ha-670527-m03) DBG | SSH cmd err, output: <nil>: 
	I0915 07:03:48.142476   26835 main.go:141] libmachine: (ha-670527-m03) KVM machine creation complete!
	I0915 07:03:48.142743   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetConfigRaw
	I0915 07:03:48.143269   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:48.143488   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:48.143647   26835 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 07:03:48.143661   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:03:48.144969   26835 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 07:03:48.144982   26835 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 07:03:48.144987   26835 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 07:03:48.144992   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.147561   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.147967   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.147991   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.148171   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.148364   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.148516   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.148675   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.148859   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.149064   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.149077   26835 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 07:03:48.253160   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:03:48.253181   26835 main.go:141] libmachine: Detecting the provisioner...
	I0915 07:03:48.253191   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.255898   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.256218   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.256244   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.256420   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.256602   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.256718   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.256824   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.256964   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.257165   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.257182   26835 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 07:03:48.371163   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 07:03:48.371239   26835 main.go:141] libmachine: found compatible host: buildroot
	I0915 07:03:48.371252   26835 main.go:141] libmachine: Provisioning with buildroot...
	I0915 07:03:48.371265   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:48.371477   26835 buildroot.go:166] provisioning hostname "ha-670527-m03"
	I0915 07:03:48.371504   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:48.371749   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.374417   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.374809   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.374831   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.374995   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.375167   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.375322   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.375450   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.375564   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.375715   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.375728   26835 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527-m03 && echo "ha-670527-m03" | sudo tee /etc/hostname
	I0915 07:03:48.498262   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527-m03
	
	I0915 07:03:48.498289   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.501040   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.501426   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.501450   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.501640   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.501829   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.501978   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.502080   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.502247   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.502410   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.502424   26835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:03:48.618930   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:03:48.618954   26835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:03:48.618969   26835 buildroot.go:174] setting up certificates
	I0915 07:03:48.618977   26835 provision.go:84] configureAuth start
	I0915 07:03:48.618986   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:48.619195   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:48.621841   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.622193   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.622219   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.622363   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.624411   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.624732   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.624754   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.624889   26835 provision.go:143] copyHostCerts
	I0915 07:03:48.624917   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:03:48.624951   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:03:48.624960   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:03:48.625023   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:03:48.625088   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:03:48.625102   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:03:48.625106   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:03:48.625130   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:03:48.625168   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:03:48.625185   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:03:48.625191   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:03:48.625218   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:03:48.625265   26835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527-m03 san=[127.0.0.1 192.168.39.4 ha-670527-m03 localhost minikube]
	I0915 07:03:48.959609   26835 provision.go:177] copyRemoteCerts
	I0915 07:03:48.959660   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:03:48.959689   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.962324   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.962696   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.962724   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.962853   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.963056   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.963218   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.963371   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.048634   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:03:49.048700   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:03:49.076049   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:03:49.076122   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 07:03:49.100276   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:03:49.100358   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:03:49.124687   26835 provision.go:87] duration metric: took 505.698463ms to configureAuth
	I0915 07:03:49.124710   26835 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:03:49.124903   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:03:49.124986   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.127619   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.127977   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.128007   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.128285   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.128496   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.128692   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.128855   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.129024   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:49.129184   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:49.129197   26835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:03:49.365319   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:03:49.365345   26835 main.go:141] libmachine: Checking connection to Docker...
	I0915 07:03:49.365355   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetURL
	I0915 07:03:49.366513   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using libvirt version 6000000
	I0915 07:03:49.369102   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.369512   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.369537   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.369706   26835 main.go:141] libmachine: Docker is up and running!
	I0915 07:03:49.369724   26835 main.go:141] libmachine: Reticulating splines...
	I0915 07:03:49.369730   26835 client.go:171] duration metric: took 27.540662889s to LocalClient.Create
	I0915 07:03:49.369751   26835 start.go:167] duration metric: took 27.540717616s to libmachine.API.Create "ha-670527"
	I0915 07:03:49.369760   26835 start.go:293] postStartSetup for "ha-670527-m03" (driver="kvm2")
	I0915 07:03:49.369769   26835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:03:49.369783   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.370010   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:03:49.370031   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.372171   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.372483   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.372505   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.372708   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.372865   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.372999   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.373120   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.456135   26835 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:03:49.460434   26835 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:03:49.460467   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:03:49.460531   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:03:49.460598   26835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:03:49.460607   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:03:49.460684   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:03:49.469981   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:03:49.494580   26835 start.go:296] duration metric: took 124.805677ms for postStartSetup
	I0915 07:03:49.494624   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetConfigRaw
	I0915 07:03:49.495201   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:49.498123   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.498539   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.498566   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.498844   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:03:49.499038   26835 start.go:128] duration metric: took 27.687436584s to createHost
	I0915 07:03:49.499059   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.501288   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.501633   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.501659   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.501794   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.501971   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.502132   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.502270   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.502513   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:49.502731   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:49.502744   26835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:03:49.610848   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726383829.581740186
	
	I0915 07:03:49.610868   26835 fix.go:216] guest clock: 1726383829.581740186
	I0915 07:03:49.610875   26835 fix.go:229] Guest: 2024-09-15 07:03:49.581740186 +0000 UTC Remote: 2024-09-15 07:03:49.499048589 +0000 UTC m=+147.194414259 (delta=82.691597ms)
	I0915 07:03:49.610890   26835 fix.go:200] guest clock delta is within tolerance: 82.691597ms
	I0915 07:03:49.610895   26835 start.go:83] releasing machines lock for "ha-670527-m03", held for 27.799437777s
	I0915 07:03:49.610911   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.611135   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:49.613829   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.614359   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.614402   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.616877   26835 out.go:177] * Found network options:
	I0915 07:03:49.618062   26835 out.go:177]   - NO_PROXY=192.168.39.54,192.168.39.222
	W0915 07:03:49.619384   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:03:49.619416   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:03:49.619430   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.619926   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.620102   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.620204   26835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:03:49.620247   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	W0915 07:03:49.620273   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:03:49.620299   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:03:49.620353   26835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:03:49.620374   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.623186   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623399   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623598   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.623623   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623783   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.623807   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623810   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.624008   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.624019   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.624156   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.624207   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.624299   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.624383   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.624483   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.859888   26835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:03:49.866143   26835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:03:49.866216   26835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:03:49.883052   26835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 07:03:49.883077   26835 start.go:495] detecting cgroup driver to use...
	I0915 07:03:49.883141   26835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:03:49.899365   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:03:49.913326   26835 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:03:49.913406   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:03:49.926614   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:03:49.940074   26835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:03:50.051904   26835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:03:50.218228   26835 docker.go:233] disabling docker service ...
	I0915 07:03:50.218298   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:03:50.233609   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:03:50.246933   26835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:03:50.363927   26835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:03:50.474597   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:03:50.488268   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:03:50.509249   26835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:03:50.509323   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.519560   26835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:03:50.519629   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.529900   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.540024   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.550170   26835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:03:50.560551   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.570254   26835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.587171   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.597852   26835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:03:50.607246   26835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 07:03:50.607294   26835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 07:03:50.620908   26835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:03:50.630690   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:03:50.746640   26835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:03:50.842040   26835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:03:50.842123   26835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:03:50.846896   26835 start.go:563] Will wait 60s for crictl version
	I0915 07:03:50.846947   26835 ssh_runner.go:195] Run: which crictl
	I0915 07:03:50.850982   26835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:03:50.891650   26835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:03:50.891739   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:03:50.920635   26835 ssh_runner.go:195] Run: crio --version
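
After restarting crio, the code waits up to 60s for the socket and then asks crictl and the crio binary for their versions (CRI-O 1.29.1 here). A hand-rolled wait with the same intent, as a sketch:

    for _ in $(seq 1 60); do
        [ -S /var/run/crio/crio.sock ] && break
        sleep 1
    done
    sudo crictl version && crio --version
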
	I0915 07:03:50.951253   26835 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:03:50.952707   26835 out.go:177]   - env NO_PROXY=192.168.39.54
	I0915 07:03:50.953929   26835 out.go:177]   - env NO_PROXY=192.168.39.54,192.168.39.222
	I0915 07:03:50.955135   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:50.957617   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:50.957994   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:50.958018   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:50.958224   26835 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:03:50.962558   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
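
The bash one-liner above is the idempotent hosts-file update: drop any stale host.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts. The same logic, unpacked for readability:

    {
        grep -v $'\thost.minikube.internal$' /etc/hosts
        printf '192.168.39.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
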
	I0915 07:03:50.977306   26835 mustload.go:65] Loading cluster: ha-670527
	I0915 07:03:50.977564   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:03:50.977993   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:50.978043   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:50.993661   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0915 07:03:50.994126   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:50.994612   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:50.994634   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:50.994903   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:50.995067   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:03:50.996695   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:03:50.997003   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:50.997045   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:51.011921   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0915 07:03:51.012422   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:51.012901   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:51.012917   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:51.013217   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:51.013376   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:03:51.013532   26835 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.4
	I0915 07:03:51.013544   26835 certs.go:194] generating shared ca certs ...
	I0915 07:03:51.013562   26835 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:03:51.013702   26835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:03:51.013756   26835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:03:51.013776   26835 certs.go:256] generating profile certs ...
	I0915 07:03:51.013897   26835 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:03:51.013928   26835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222
	I0915 07:03:51.013950   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.222 192.168.39.4 192.168.39.254]
	I0915 07:03:51.155977   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222 ...
	I0915 07:03:51.156004   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222: {Name:mk71e34c696b75e661b03e0c64f1d14a00e75c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:03:51.156167   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222 ...
	I0915 07:03:51.156178   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222: {Name:mk165e15c7f6cfc7c0d0b32169597c56d3e9f829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:03:51.156248   26835 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:03:51.156378   26835 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
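
Regenerating the apiserver certificate is the crux of adding a third control-plane node: its SAN list has to contain the cluster service IP, localhost, every control-plane node IP, and the kube-vip VIP, i.e. the seven IPs logged above. minikube does this in Go; the openssl sketch below only illustrates the same idea and is not the code path used here (subject, key size and validity are assumptions, and the CA paths are placeholders for the minikubeCA pair referenced in the log):

    CA=ca.crt; CAKEY=ca.key
    openssl req -new -newkey rsa:2048 -nodes -subj '/CN=minikube' \
        -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA "$CA" -CAkey "$CAKEY" -CAcreateserial \
        -days 365 -out apiserver.crt -extfile <(printf '%s' \
        'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.54,IP:192.168.39.222,IP:192.168.39.4,IP:192.168.39.254')
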
	I0915 07:03:51.156511   26835 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:03:51.156527   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:03:51.156541   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:03:51.156554   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:03:51.156566   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:03:51.156578   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:03:51.156588   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:03:51.156600   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:03:51.177878   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:03:51.177964   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:03:51.178041   26835 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:03:51.178053   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:03:51.178075   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:03:51.178098   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:03:51.178119   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:03:51.178156   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:03:51.178185   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.178205   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.178217   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.178245   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:03:51.180867   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:51.181258   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:03:51.181287   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:51.181436   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:03:51.181641   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:03:51.181782   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:03:51.181922   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:03:51.258165   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0915 07:03:51.263307   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0915 07:03:51.285004   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0915 07:03:51.290822   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0915 07:03:51.302586   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0915 07:03:51.307017   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0915 07:03:51.317901   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0915 07:03:51.322096   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0915 07:03:51.332670   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0915 07:03:51.336604   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0915 07:03:51.352869   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0915 07:03:51.357061   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0915 07:03:51.368097   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:03:51.395494   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:03:51.420698   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:03:51.445874   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:03:51.470906   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0915 07:03:51.496181   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:03:51.522200   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:03:51.547820   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:03:51.575355   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:03:51.601071   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:03:51.626137   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:03:51.650004   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0915 07:03:51.667278   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0915 07:03:51.685002   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0915 07:03:51.702974   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0915 07:03:51.720906   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0915 07:03:51.738527   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0915 07:03:51.754706   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0915 07:03:51.774061   26835 ssh_runner.go:195] Run: openssl version
	I0915 07:03:51.779874   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:03:51.790963   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.795362   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.795416   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.801181   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:03:51.812514   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:03:51.825177   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.829834   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.829889   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.836542   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 07:03:51.849297   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:03:51.862096   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.866913   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.866974   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.873520   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
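
The ls/openssl/ln sequence above is how OpenSSL's CA directory lookup works: certificates are found by subject hash, so each PEM under /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs (b5213941.0 for the minikube CA above). One file done by hand:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")      # b5213941 for this CA
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
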
	I0915 07:03:51.886365   26835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:03:51.890725   26835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:03:51.890781   26835 kubeadm.go:934] updating node {m03 192.168.39.4 8443 v1.31.1 crio true true} ...
	I0915 07:03:51.890866   26835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:03:51.890893   26835 kube-vip.go:115] generating kube-vip config ...
	I0915 07:03:51.890934   26835 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:03:51.910815   26835 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:03:51.910884   26835 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
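
The manifest above is the per-node kube-vip static pod: ARP announcement of the HA VIP 192.168.39.254 on eth0, leader election through the plndr-cp-lock lease, and control-plane load-balancing on port 8443 (auto-enabled a few lines earlier). Once the file lands in /etc/kubernetes/manifests below, kubelet runs it without involving the API server; a quick check from the node, as a sketch:

    sudo crictl ps --name kube-vip                     # the container should be Running
    ip addr show dev eth0 | grep 192.168.39.254        # the VIP appears on the current leader only
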
	I0915 07:03:51.910938   26835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:03:51.922823   26835 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0915 07:03:51.922877   26835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0915 07:03:51.934450   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0915 07:03:51.934461   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0915 07:03:51.934483   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:03:51.934494   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:03:51.934523   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0915 07:03:51.934541   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:03:51.934549   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:03:51.934585   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:03:51.952258   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:03:51.952314   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0915 07:03:51.952348   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0915 07:03:51.952354   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:03:51.952392   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0915 07:03:51.952416   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0915 07:03:51.983634   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0915 07:03:51.983679   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
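
The kubelet, kubectl and kubeadm binaries are not cached on the node, so they are fetched from dl.k8s.io, verified against the upstream .sha256 files (the checksum= fragment in the URLs above), and copied into /var/lib/minikube/binaries/v1.31.1. A manual download with the same verification, as a sketch:

    V=v1.31.1
    for b in kubeadm kubectl kubelet; do
        curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/amd64/${b}"
        curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/amd64/${b}.sha256"
        echo "$(cat "${b}.sha256")  ${b}" | sha256sum --check -
    done
    sudo install -m 0755 kubeadm kubectl kubelet "/var/lib/minikube/binaries/${V}/"
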
	I0915 07:03:52.820714   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0915 07:03:52.831204   26835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0915 07:03:52.849837   26835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:03:52.867416   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:03:52.885297   26835 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:03:52.889682   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:03:52.905701   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:03:53.023843   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:03:53.041789   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:03:53.042126   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:53.042174   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:53.057077   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0915 07:03:53.057609   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:53.058160   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:53.058185   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:53.058581   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:53.058797   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:03:53.058950   26835 start.go:317] joinCluster: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:03:53.059106   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0915 07:03:53.059126   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:03:53.062011   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:53.062410   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:03:53.062440   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:53.062587   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:03:53.062754   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:03:53.062900   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:03:53.063009   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:03:53.220773   26835 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:03:53.220818   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9b0rgg.sy0fprvhhqv1kkrn --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m03 --control-plane --apiserver-advertise-address=192.168.39.4 --apiserver-bind-port=8443"
	I0915 07:04:16.954702   26835 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9b0rgg.sy0fprvhhqv1kkrn --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m03 --control-plane --apiserver-advertise-address=192.168.39.4 --apiserver-bind-port=8443": (23.733859338s)
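
Joining m03 is the two-step kubeadm flow visible above: the primary mints a join command with a fresh token, and the new node runs it with --control-plane plus its own advertise address and CRI socket. Stripped of the log framing, the shape is (token and hash are per-cluster; placeholders used here):

    # On an existing control-plane node: print a join command (ttl=0 keeps the token from expiring mid-join).
    sudo kubeadm token create --print-join-command --ttl=0

    # On the joining node, with the token/hash printed above:
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane \
        --apiserver-advertise-address 192.168.39.4 --apiserver-bind-port 8443 \
        --cri-socket unix:///var/run/crio/crio.sock \
        --node-name ha-670527-m03 --ignore-preflight-errors=all
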
	I0915 07:04:16.954740   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0915 07:04:17.545275   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-670527-m03 minikube.k8s.io/updated_at=2024_09_15T07_04_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=ha-670527 minikube.k8s.io/primary=false
	I0915 07:04:17.679917   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-670527-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0915 07:04:17.811771   26835 start.go:319] duration metric: took 24.752815611s to joinCluster
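
The two kubectl invocations above are the post-join bookkeeping that closes out joinCluster (23.7s of the 24.75s was the kubeadm join itself): label the node with minikube's metadata, marking it as a non-primary control plane, and drop the control-plane NoSchedule taint so ordinary workloads can run on it. In plain kubectl terms:

    kubectl label --overwrite node ha-670527-m03 minikube.k8s.io/name=ha-670527 minikube.k8s.io/primary=false
    kubectl taint node ha-670527-m03 node-role.kubernetes.io/control-plane:NoSchedule-
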
	I0915 07:04:17.811839   26835 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:04:17.812342   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:04:17.813412   26835 out.go:177] * Verifying Kubernetes components...
	I0915 07:04:17.814770   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:04:18.026111   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:04:18.056975   26835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:04:18.057305   26835 kapi.go:59] client config for ha-670527: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key", CAFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0915 07:04:18.057388   26835 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0915 07:04:18.057628   26835 node_ready.go:35] waiting up to 6m0s for node "ha-670527-m03" to be "Ready" ...
	I0915 07:04:18.057709   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:18.057720   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:18.057731   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:18.057742   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:18.060730   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
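
What follows is a roughly 500ms poll of the node object until its Ready condition turns True; note the warning above where the stale VIP host in the kubeconfig is overridden with the first control-plane's address. The same check from the CLI (context name assumed to match the profile):

    kubectl --context ha-670527 get node ha-670527-m03 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until the condition flips:
    kubectl --context ha-670527 wait node/ha-670527-m03 --for=condition=Ready --timeout=6m
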
	I0915 07:04:18.558540   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:18.558561   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:18.558570   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:18.558575   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:18.562681   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:19.058745   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:19.058767   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:19.058779   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:19.058786   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:19.063064   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:19.557966   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:19.557986   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:19.557994   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:19.557998   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:19.561376   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:20.058793   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:20.058811   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:20.058818   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:20.058822   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:20.062366   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:20.063109   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:20.558275   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:20.558295   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:20.558303   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:20.558307   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:20.561661   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:21.058412   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:21.058434   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:21.058448   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:21.058455   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:21.061951   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:21.558573   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:21.558595   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:21.558606   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:21.558612   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:21.562273   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:22.058212   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:22.058244   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:22.058255   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:22.058261   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:22.062180   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:22.063257   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:22.558344   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:22.558367   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:22.558375   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:22.558378   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:22.562276   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:23.058414   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:23.058433   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:23.058446   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:23.058451   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:23.061901   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:23.557846   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:23.557871   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:23.557880   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:23.557885   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:23.561305   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:24.057956   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:24.057977   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:24.057988   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:24.057992   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:24.061821   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:24.558595   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:24.558613   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:24.558623   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:24.558627   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:24.561698   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:24.562359   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:25.058082   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:25.058101   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:25.058108   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:25.058113   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:25.061700   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:25.558247   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:25.558268   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:25.558274   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:25.558277   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:25.561355   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:26.058402   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:26.058429   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:26.058436   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:26.058440   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:26.062379   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:26.557962   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:26.557981   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:26.557989   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:26.557993   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:26.561149   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:27.058781   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:27.058804   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:27.058815   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:27.058822   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:27.062325   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:27.063170   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:27.558060   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:27.558084   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:27.558093   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:27.558102   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:27.562217   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:28.058215   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:28.058240   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:28.058253   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:28.058259   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:28.063049   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:28.558066   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:28.558089   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:28.558097   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:28.558102   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:28.561637   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:29.058380   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:29.058402   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:29.058411   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:29.058415   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:29.071200   26835 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0915 07:04:29.071654   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:29.557965   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:29.557986   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:29.557994   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:29.558000   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:29.561665   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:30.057997   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:30.058014   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:30.058022   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:30.058026   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:30.061599   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:30.558552   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:30.558573   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:30.558580   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:30.558583   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:30.561981   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:31.058748   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:31.058771   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:31.058779   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:31.058785   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:31.063276   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:31.557993   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:31.558019   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:31.558030   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:31.558036   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:31.562539   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:31.563535   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:32.058337   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:32.058358   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:32.058367   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:32.058371   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:32.061998   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:32.558690   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:32.558710   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:32.558717   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:32.558722   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:32.562446   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:33.058344   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:33.058370   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:33.058378   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:33.058382   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:33.061651   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:33.557983   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:33.558008   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:33.558018   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:33.558026   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:33.562087   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:34.057979   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:34.058001   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:34.058010   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:34.058016   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:34.061323   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:34.061914   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:34.557991   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:34.558015   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:34.558026   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:34.558031   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:34.561284   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:35.058494   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:35.058512   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:35.058519   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:35.058522   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:35.061655   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:35.558484   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:35.558504   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:35.558519   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:35.558525   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:35.561562   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.058223   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:36.058243   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.058254   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.058259   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.061088   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.061775   26835 node_ready.go:49] node "ha-670527-m03" has status "Ready":"True"
	I0915 07:04:36.061792   26835 node_ready.go:38] duration metric: took 18.004148589s for node "ha-670527-m03" to be "Ready" ...
	I0915 07:04:36.061800   26835 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
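
With the node Ready after roughly 18s, the wait moves on to the system-critical pods, matching each of the labels listed above. A CLI equivalent, with the selectors copied from that log line:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
        kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
    done
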
	I0915 07:04:36.061888   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:36.061899   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.061905   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.061909   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.067789   26835 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:04:36.073680   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.073746   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4w6x7
	I0915 07:04:36.073754   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.073761   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.073764   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.076482   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.077149   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.077166   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.077176   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.077186   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.079923   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.080334   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.080348   26835 pod_ready.go:82] duration metric: took 6.647941ms for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.080356   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.080399   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lpj44
	I0915 07:04:36.080407   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.080413   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.080418   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.082754   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.083507   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.083522   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.083529   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.083533   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.085737   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.086257   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.086273   26835 pod_ready.go:82] duration metric: took 5.912191ms for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.086281   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.086331   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527
	I0915 07:04:36.086338   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.086345   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.086349   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.088849   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.089335   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.089346   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.089353   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.089359   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.091932   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.092398   26835 pod_ready.go:93] pod "etcd-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.092412   26835 pod_ready.go:82] duration metric: took 6.124711ms for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.092421   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.092473   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m02
	I0915 07:04:36.092482   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.092492   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.092500   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.094908   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.095587   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:36.095603   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.095614   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.095622   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.098307   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.098726   26835 pod_ready.go:93] pod "etcd-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.098740   26835 pod_ready.go:82] duration metric: took 6.312184ms for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.098749   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.258989   26835 request.go:632] Waited for 160.18431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m03
	I0915 07:04:36.259053   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m03
	I0915 07:04:36.259061   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.259068   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.259072   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.263000   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.458977   26835 request.go:632] Waited for 195.220619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:36.459049   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:36.459055   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.459062   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.459065   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.462070   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.462736   26835 pod_ready.go:93] pod "etcd-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.462752   26835 pod_ready.go:82] duration metric: took 363.99652ms for pod "etcd-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.462775   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.658952   26835 request.go:632] Waited for 196.114758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:04:36.659017   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:04:36.659025   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.659034   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.659049   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.662171   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.858552   26835 request.go:632] Waited for 195.468363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.858603   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.858608   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.858614   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.858618   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.861831   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.862334   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.862360   26835 pod_ready.go:82] duration metric: took 399.566105ms for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.862372   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.058493   26835 request.go:632] Waited for 196.021944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:04:37.058545   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:04:37.058550   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.058557   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.058561   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.061803   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.258993   26835 request.go:632] Waited for 196.205305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:37.259041   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:37.259046   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.259052   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.259056   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.262105   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.262674   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:37.262691   26835 pod_ready.go:82] duration metric: took 400.311953ms for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.262700   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.459255   26835 request.go:632] Waited for 196.501925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m03
	I0915 07:04:37.459300   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m03
	I0915 07:04:37.459305   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.459316   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.459321   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.462842   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.659015   26835 request.go:632] Waited for 195.36074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:37.659089   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:37.659098   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.659110   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.659117   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.662639   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.663138   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:37.663160   26835 pod_ready.go:82] duration metric: took 400.452596ms for pod "kube-apiserver-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.663173   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.858296   26835 request.go:632] Waited for 195.060423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:04:37.858391   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:04:37.858401   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.858411   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.858419   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.861773   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.058751   26835 request.go:632] Waited for 196.320861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:38.058837   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:38.058849   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.058860   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.058868   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.062211   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.062919   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:38.062935   26835 pod_ready.go:82] duration metric: took 399.755157ms for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.062944   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.258261   26835 request.go:632] Waited for 195.259507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:04:38.258319   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:04:38.258324   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.258332   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.258335   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.261550   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.458682   26835 request.go:632] Waited for 196.148029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:38.458747   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:38.458753   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.458760   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.458765   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.461968   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.462530   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:38.462553   26835 pod_ready.go:82] duration metric: took 399.602164ms for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.462566   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.658641   26835 request.go:632] Waited for 196.007932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m03
	I0915 07:04:38.658716   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m03
	I0915 07:04:38.658722   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.658730   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.658761   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.662366   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.858380   26835 request.go:632] Waited for 195.281768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:38.858432   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:38.858437   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.858444   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.858449   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.862305   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.863021   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:38.863036   26835 pod_ready.go:82] duration metric: took 400.460329ms for pod "kube-controller-manager-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.863046   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.059255   26835 request.go:632] Waited for 196.150242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:04:39.059312   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:04:39.059318   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.059325   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.059329   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.062619   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.258834   26835 request.go:632] Waited for 195.358373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:39.258890   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:39.258897   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.258907   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.258912   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.262536   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.263146   26835 pod_ready.go:93] pod "kube-proxy-25xtk" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:39.263163   26835 pod_ready.go:82] duration metric: took 400.111553ms for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.263172   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.458302   26835 request.go:632] Waited for 195.0497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:04:39.458353   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:04:39.458358   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.458365   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.458367   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.461983   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.659280   26835 request.go:632] Waited for 196.352701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:39.659339   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:39.659344   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.659351   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.659355   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.662770   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.663296   26835 pod_ready.go:93] pod "kube-proxy-kt79t" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:39.663313   26835 pod_ready.go:82] duration metric: took 400.135176ms for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.663322   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mbcxc" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.858495   26835 request.go:632] Waited for 195.117993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbcxc
	I0915 07:04:39.858570   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbcxc
	I0915 07:04:39.858578   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.858585   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.858589   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.862321   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.058290   26835 request.go:632] Waited for 195.193866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:40.058338   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:40.058345   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.058354   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.058362   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.061568   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.062156   26835 pod_ready.go:93] pod "kube-proxy-mbcxc" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:40.062178   26835 pod_ready.go:82] duration metric: took 398.847996ms for pod "kube-proxy-mbcxc" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.062190   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.259249   26835 request.go:632] Waited for 196.997886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:04:40.259318   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:04:40.259325   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.259334   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.259344   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.262824   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.458929   26835 request.go:632] Waited for 195.362507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:40.459002   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:40.459009   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.459022   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.459032   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.462065   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.462606   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:40.462628   26835 pod_ready.go:82] duration metric: took 400.429796ms for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.462639   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.658400   26835 request.go:632] Waited for 195.699406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:04:40.658490   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:04:40.658501   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.658512   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.658522   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.661704   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.858883   26835 request.go:632] Waited for 196.406536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:40.858936   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:40.858941   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.858952   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.858957   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.862232   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.862827   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:40.862847   26835 pod_ready.go:82] duration metric: took 400.202103ms for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.862857   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:41.058719   26835 request.go:632] Waited for 195.785516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m03
	I0915 07:04:41.058786   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m03
	I0915 07:04:41.058796   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.058808   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.058818   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.062547   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:41.258758   26835 request.go:632] Waited for 195.355688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:41.258808   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:41.258813   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.258820   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.258825   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.262414   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:41.263220   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:41.263240   26835 pod_ready.go:82] duration metric: took 400.375522ms for pod "kube-scheduler-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:41.263254   26835 pod_ready.go:39] duration metric: took 5.201426682s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:04:41.263272   26835 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:04:41.263335   26835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:04:41.285896   26835 api_server.go:72] duration metric: took 23.474016374s to wait for apiserver process to appear ...
	I0915 07:04:41.285926   26835 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:04:41.285950   26835 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0915 07:04:41.293498   26835 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0915 07:04:41.293569   26835 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0915 07:04:41.293581   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.293591   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.293596   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.295108   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:04:41.295177   26835 api_server.go:141] control plane version: v1.31.1
	I0915 07:04:41.295192   26835 api_server.go:131] duration metric: took 9.260179ms to wait for apiserver health ...
	I0915 07:04:41.295199   26835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:04:41.458590   26835 request.go:632] Waited for 163.32786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.458650   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.458655   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.458661   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.458665   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.464692   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:04:41.471606   26835 system_pods.go:59] 24 kube-system pods found
	I0915 07:04:41.471631   26835 system_pods.go:61] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:04:41.471635   26835 system_pods.go:61] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:04:41.471639   26835 system_pods.go:61] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:04:41.471642   26835 system_pods.go:61] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:04:41.471646   26835 system_pods.go:61] "etcd-ha-670527-m03" [dfd469fc-8e59-49af-bc8e-6da438608405] Running
	I0915 07:04:41.471649   26835 system_pods.go:61] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:04:41.471652   26835 system_pods.go:61] "kindnet-fcgbj" [39fe5d8d-e647-4133-80ba-24e9b4781c8e] Running
	I0915 07:04:41.471657   26835 system_pods.go:61] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:04:41.471659   26835 system_pods.go:61] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:04:41.471662   26835 system_pods.go:61] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:04:41.471665   26835 system_pods.go:61] "kube-apiserver-ha-670527-m03" [e7ba2773-71e2-409f-82c7-c205f7126edd] Running
	I0915 07:04:41.471668   26835 system_pods.go:61] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:04:41.471671   26835 system_pods.go:61] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:04:41.471674   26835 system_pods.go:61] "kube-controller-manager-ha-670527-m03" [c260fc3a-bfcb-4457-9f92-6ddcd633d30d] Running
	I0915 07:04:41.471677   26835 system_pods.go:61] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:04:41.471680   26835 system_pods.go:61] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:04:41.471684   26835 system_pods.go:61] "kube-proxy-mbcxc" [bb5a9c97-bdc1-4346-b2cb-117e1e2d7fce] Running
	I0915 07:04:41.471689   26835 system_pods.go:61] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:04:41.471692   26835 system_pods.go:61] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:04:41.471695   26835 system_pods.go:61] "kube-scheduler-ha-670527-m03" [d6ccae33-5434-4de4-a1d9-447fe01e5c54] Running
	I0915 07:04:41.471700   26835 system_pods.go:61] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:04:41.471703   26835 system_pods.go:61] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:04:41.471706   26835 system_pods.go:61] "kube-vip-ha-670527-m03" [c1cfdeee-1f16-4bdc-96a7-81e5863a9146] Running
	I0915 07:04:41.471708   26835 system_pods.go:61] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:04:41.471713   26835 system_pods.go:74] duration metric: took 176.510038ms to wait for pod list to return data ...
	I0915 07:04:41.471723   26835 default_sa.go:34] waiting for default service account to be created ...
	I0915 07:04:41.658983   26835 request.go:632] Waited for 187.197567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:04:41.659035   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:04:41.659040   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.659047   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.659051   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.664931   26835 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:04:41.665064   26835 default_sa.go:45] found service account: "default"
	I0915 07:04:41.665081   26835 default_sa.go:55] duration metric: took 193.352918ms for default service account to be created ...
	I0915 07:04:41.665089   26835 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 07:04:41.858799   26835 request.go:632] Waited for 193.620407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.858852   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.858857   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.858865   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.858869   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.865398   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:04:41.873062   26835 system_pods.go:86] 24 kube-system pods found
	I0915 07:04:41.873110   26835 system_pods.go:89] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:04:41.873128   26835 system_pods.go:89] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:04:41.873135   26835 system_pods.go:89] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:04:41.873141   26835 system_pods.go:89] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:04:41.873146   26835 system_pods.go:89] "etcd-ha-670527-m03" [dfd469fc-8e59-49af-bc8e-6da438608405] Running
	I0915 07:04:41.873151   26835 system_pods.go:89] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:04:41.873157   26835 system_pods.go:89] "kindnet-fcgbj" [39fe5d8d-e647-4133-80ba-24e9b4781c8e] Running
	I0915 07:04:41.873165   26835 system_pods.go:89] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:04:41.873172   26835 system_pods.go:89] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:04:41.873180   26835 system_pods.go:89] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:04:41.873187   26835 system_pods.go:89] "kube-apiserver-ha-670527-m03" [e7ba2773-71e2-409f-82c7-c205f7126edd] Running
	I0915 07:04:41.873200   26835 system_pods.go:89] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:04:41.873208   26835 system_pods.go:89] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:04:41.873215   26835 system_pods.go:89] "kube-controller-manager-ha-670527-m03" [c260fc3a-bfcb-4457-9f92-6ddcd633d30d] Running
	I0915 07:04:41.873223   26835 system_pods.go:89] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:04:41.873227   26835 system_pods.go:89] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:04:41.873236   26835 system_pods.go:89] "kube-proxy-mbcxc" [bb5a9c97-bdc1-4346-b2cb-117e1e2d7fce] Running
	I0915 07:04:41.873242   26835 system_pods.go:89] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:04:41.873251   26835 system_pods.go:89] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:04:41.873256   26835 system_pods.go:89] "kube-scheduler-ha-670527-m03" [d6ccae33-5434-4de4-a1d9-447fe01e5c54] Running
	I0915 07:04:41.873264   26835 system_pods.go:89] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:04:41.873269   26835 system_pods.go:89] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:04:41.873274   26835 system_pods.go:89] "kube-vip-ha-670527-m03" [c1cfdeee-1f16-4bdc-96a7-81e5863a9146] Running
	I0915 07:04:41.873282   26835 system_pods.go:89] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:04:41.873291   26835 system_pods.go:126] duration metric: took 208.195329ms to wait for k8s-apps to be running ...
	I0915 07:04:41.873303   26835 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:04:41.873353   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:04:41.893596   26835 system_svc.go:56] duration metric: took 20.281709ms WaitForService to wait for kubelet
	I0915 07:04:41.893634   26835 kubeadm.go:582] duration metric: took 24.081760048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:04:41.893657   26835 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:04:42.058985   26835 request.go:632] Waited for 165.250049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0915 07:04:42.059043   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0915 07:04:42.059060   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:42.059067   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:42.059073   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:42.062924   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:42.063813   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:04:42.063834   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:04:42.063846   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:04:42.063851   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:04:42.063858   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:04:42.063863   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:04:42.063871   26835 node_conditions.go:105] duration metric: took 170.208899ms to run NodePressure ...
	I0915 07:04:42.063885   26835 start.go:241] waiting for startup goroutines ...
	I0915 07:04:42.063905   26835 start.go:255] writing updated cluster config ...
	I0915 07:04:42.064189   26835 ssh_runner.go:195] Run: rm -f paused
	I0915 07:04:42.117372   26835 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 07:04:42.119782   26835 out.go:177] * Done! kubectl is now configured to use "ha-670527" cluster and "default" namespace by default
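	
	The wait trace above reduces to one pattern: poll GET /api/v1/nodes/<name> until the node's Ready condition reports True, then poll each system-critical pod in kube-system until its Ready condition reports True, tolerating transient errors and the client-side throttling pauses logged by request.go. The sketch below is an editor-added illustration of that pattern using client-go; the kubeconfig path, node name, and the 500ms/6m polling interval and timeout are assumptions read off the trace, not minikube's actual node_ready.go/pod_ready.go code.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the node's Ready condition is True,
	// the same check the node_ready.go lines in the trace perform.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Illustrative: load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
	
		// Poll the node object (the trace shows ~500ms spacing) until it is Ready.
		err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				n, err := cs.CoreV1().Nodes().Get(ctx, "ha-670527-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // retry on transient errors
				}
				return nodeReady(n), nil
			})
		if err != nil {
			panic(err)
		}
	
		// Then wait for every kube-system pod to report Ready, analogous to the
		// pod_ready.go loop over coredns/etcd/kube-apiserver/... above.
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			name := p.Name
			err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
				func(ctx context.Context) (bool, error) {
					cur, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
					if err != nil {
						return false, nil
					}
					return podReady(cur), nil
				})
			if err != nil {
				panic(err)
			}
			fmt.Printf("pod %q is Ready\n", name)
		}
	}
	
	Returning (false, nil) on a failed GET keeps the poll alive rather than aborting, which matches the retry behavior visible in the trace; the 160-196ms "Waited for ... due to client-side throttling" lines come from client-go's default rate limiter, not from this polling loop itself.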
	
	
	==> CRI-O <==
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.325879316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384103325858545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e536ef9a-dce3-428f-8248-41dc04baf5c7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.326282783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b20602b-6a8e-492d-88d1-bb2ceb688e84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.326338896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b20602b-6a8e-492d-88d1-bb2ceb688e84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.326578188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b20602b-6a8e-492d-88d1-bb2ceb688e84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.364822964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4205c5f9-3952-40c9-a570-e61193b713b6 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.364898000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4205c5f9-3952-40c9-a570-e61193b713b6 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.366231670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9094c616-5236-4c8d-b914-a8a8015ff2b8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.367021839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384103366997195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9094c616-5236-4c8d-b914-a8a8015ff2b8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.367731312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1f755c2-0ba5-4bc2-a121-a5e0fbb27d2c name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.367792434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1f755c2-0ba5-4bc2-a121-a5e0fbb27d2c name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.368188326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1f755c2-0ba5-4bc2-a121-a5e0fbb27d2c name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.407653448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=509f47ed-b668-4124-bdcb-afb5b519dbe8 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.407735818Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=509f47ed-b668-4124-bdcb-afb5b519dbe8 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.408606138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35532ec3-7087-4cf0-8027-434136cadedd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.409023010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384103409000038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35532ec3-7087-4cf0-8027-434136cadedd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.409685141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4671bece-c19a-44b5-8d7d-a7097a9f771e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.409756303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4671bece-c19a-44b5-8d7d-a7097a9f771e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.409981285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4671bece-c19a-44b5-8d7d-a7097a9f771e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.450275468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6479e70-3ec2-48b2-b888-0eeb7bdaaf7b name=/runtime.v1.RuntimeService/Version
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.450360943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6479e70-3ec2-48b2-b888-0eeb7bdaaf7b name=/runtime.v1.RuntimeService/Version
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.451493371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94c81991-7f6b-4849-9ebc-f5b95bc9a421 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.451909747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384103451887423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94c81991-7f6b-4849-9ebc-f5b95bc9a421 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.452503367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f12e070-b090-4c64-8fa6-b47e3756818e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.452581255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f12e070-b090-4c64-8fa6-b47e3756818e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:08:23 ha-670527 crio[665]: time="2024-09-15 07:08:23.452796014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f12e070-b090-4c64-8fa6-b47e3756818e name=/runtime.v1.RuntimeService/ListContainers
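Editor's note: the repeated Version/ImageFsInfo/ListContainers request-response pairs above appear to be routine polling of the CRI-O socket, and the "No filters were applied" lines confirm the runtime is returning its full container list each time. For reference, the same listing can be reproduced directly against that socket. The following is a minimal Go sketch only (not part of the test suite), assuming the k8s.io/cri-api v1 client and the unix:///var/run/crio/crio.sock path shown in the node annotations below.

// Minimal sketch: list all containers over the CRI-O socket, mirroring the
// ListContainers calls in the debug log above. Assumes the socket path from
// the node's cri-socket annotation and the v1 CRI API.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s  %s\n",
			c.Id, c.Metadata.Name, c.Labels["io.kubernetes.pod.name"])
	}
}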
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1d6d31c8606ff       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ee2d1970f1e78       busybox-7dff88458-rvbkj
	fde41666d8c29       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f7b4d1299c815       coredns-7c65d6cfc9-lpj44
	489cc4a0fb63e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6f3bebb3d80d8       coredns-7c65d6cfc9-4w6x7
	606b9d6854130       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d18c8f0f6f2f1       storage-provisioner
	aa6d2372c6ae3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   843991b56a260       kindnet-6sqhd
	b75dfe3b6121c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   594c62a0375e6       kube-proxy-25xtk
	5733f96a0b004       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   ad627ce8b936b       kube-vip-ha-670527
	bcaf162e8fd08       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   24baf0c9e05ee       kube-controller-manager-ha-670527
	bbb55bff5eb6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   6e7b02c328479       etcd-ha-670527
	f3e8e75a70017       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   aa06f2e231607       kube-apiserver-ha-670527
	e3475f73ce55b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   58967292ecf37       kube-scheduler-ha-670527
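Editor's note: to drill into a single row of this table, the CRI ContainerStatus call returns per-container detail. The helper below is hypothetical (the name describeBusybox is not from the test code) and extends the previous sketch, reusing its imports and client; the container ID is the full ID of the busybox entry as logged above.

// Extends the previous sketch: call this from main with the same client.
func describeBusybox(ctx context.Context, client runtimeapi.RuntimeServiceClient) error {
	// Full ID of the truncated "1d6d31c8606ff" busybox row above.
	const id = "1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6"
	resp, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: id,
		Verbose:     true, // ask the runtime to include extra info
	})
	if err != nil {
		return err
	}
	s := resp.Status
	fmt.Printf("%s state=%s image=%s started=%s\n",
		s.Metadata.Name, s.State, s.Image.Image,
		time.Unix(0, s.StartedAt).Format(time.RFC3339))
	return nil
}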
	
	
	==> coredns [489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f] <==
	[INFO] 10.244.0.4:45235 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000089564s
	[INFO] 10.244.0.4:51521 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001979834s
	[INFO] 10.244.2.2:37125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203403s
	[INFO] 10.244.2.2:50161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165008s
	[INFO] 10.244.2.2:56879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01294413s
	[INFO] 10.244.2.2:45083 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125708s
	[INFO] 10.244.1.2:52633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197932s
	[INFO] 10.244.1.2:50573 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770125s
	[INFO] 10.244.1.2:35701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180854s
	[INFO] 10.244.1.2:41389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132037s
	[INFO] 10.244.1.2:58202 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183842s
	[INFO] 10.244.1.2:49817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109611s
	[INFO] 10.244.0.4:52793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113159s
	[INFO] 10.244.0.4:38656 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106719s
	[INFO] 10.244.0.4:38122 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061858s
	[INFO] 10.244.2.2:46127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114243s
	[INFO] 10.244.1.2:54602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101327s
	[INFO] 10.244.1.2:55582 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124623s
	[INFO] 10.244.0.4:55917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104871s
	[INFO] 10.244.2.2:41069 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001913s
	[INFO] 10.244.1.2:58958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140612s
	[INFO] 10.244.1.2:39608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015189s
	[INFO] 10.244.1.2:40627 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154411s
	[INFO] 10.244.0.4:53377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121746s
	[INFO] 10.244.0.4:52578 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089133s
	
	
	==> coredns [fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4] <==
	[INFO] 10.244.2.2:43257 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151941s
	[INFO] 10.244.2.2:33629 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003973916s
	[INFO] 10.244.2.2:33194 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170028s
	[INFO] 10.244.2.2:40376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000180655s
	[INFO] 10.244.1.2:52585 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001449553s
	[INFO] 10.244.1.2:53060 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208928s
	[INFO] 10.244.0.4:56755 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001727309s
	[INFO] 10.244.0.4:60825 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234694s
	[INFO] 10.244.0.4:58873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001046398s
	[INFO] 10.244.0.4:42322 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104256s
	[INFO] 10.244.0.4:34109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000038552s
	[INFO] 10.244.2.2:60809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124458s
	[INFO] 10.244.2.2:36825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093407s
	[INFO] 10.244.2.2:56100 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075616s
	[INFO] 10.244.1.2:47124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122782s
	[INFO] 10.244.1.2:55965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096943s
	[INFO] 10.244.0.4:34915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120044s
	[INFO] 10.244.0.4:43696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073334s
	[INFO] 10.244.0.4:59415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158827s
	[INFO] 10.244.2.2:35148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177137s
	[INFO] 10.244.2.2:58466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166358s
	[INFO] 10.244.2.2:60740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000304437s
	[INFO] 10.244.1.2:54984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149622s
	[INFO] 10.244.0.4:44476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075065s
	[INFO] 10.244.0.4:37204 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054807s
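Editor's note: these CoreDNS entries are in-cluster lookups of names such as kubernetes.default.svc.cluster.local and host.minikube.internal, issued from pod IPs in the 10.244.x.x ranges. A comparable lookup can be sent straight to the cluster DNS service; the sketch below assumes that service sits at 10.96.0.10 (the address implied by the 10.0.96.10.in-addr.arpa PTR queries) and is reachable from wherever the code runs, which generally means running it inside the cluster or on a node.

// Minimal sketch: resolve a cluster-internal name through the cluster DNS
// service directly, the way the pods in the log above do.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// 10.96.0.10:53 is an assumption based on the PTR queries above.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default.svc.cluster.local ->", addrs)
}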
	
	
	==> describe nodes <==
	Name:               ha-670527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:08:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:02:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-670527
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4352c21da1154e49b4f2cd8223ef4f22
	  System UUID:                4352c21d-a115-4e49-b4f2-cd8223ef4f22
	  Boot ID:                    28f13bdf-c0fc-4804-9eaa-c62790060557
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rvbkj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-4w6x7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7c65d6cfc9-lpj44             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-670527                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-6sqhd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-670527             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-670527    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-25xtk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-670527             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-vip-ha-670527                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m15s  kube-proxy       
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-670527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-670527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-670527 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal  NodeReady                6m4s   kubelet          Node ha-670527 status is now: NodeReady
	  Normal  RegisteredNode           5m20s  node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal  RegisteredNode           4m1s   node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
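Editor's note: ha-670527 reports Ready=True here, while ha-670527-m02 below carries node.kubernetes.io/unreachable taints and Unknown conditions because its kubelet stopped posting status. The same condition and taint data can be read programmatically; the client-go sketch below is illustrative only (not how the test harness collects this output) and assumes a kubeconfig pointing at this cluster.

// Minimal sketch: print each node's Ready condition and taint count.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the ha-670527 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%-16s Ready=%s reason=%s taints=%d\n",
					n.Name, c.Status, c.Reason, len(n.Spec.Taints))
			}
		}
	}
}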
	
	
	Name:               ha-670527-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:05:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-670527-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 937badb420fd46bab8c9040c7d7b213d
	  System UUID:                937badb4-20fd-46ba-b8c9-040c7d7b213d
	  Boot ID:                    12bb372b-3155-48ac-9bc2-c620b0e7b549
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxwp9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-670527-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-mn54b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m29s
	  kube-system                 kube-apiserver-ha-670527-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-670527-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-proxy-kt79t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-ha-670527-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-vip-ha-670527-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m29s (x3 over 5m29s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x3 over 5m29s)  kubelet          Node ha-670527-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x3 over 5m29s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeReady                5m6s                   kubelet          Node ha-670527-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-670527-m02 status is now: NodeNotReady
	
	
	Name:               ha-670527-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_04_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:04:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:08:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-670527-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16b4217ec868437981a046051de1bf49
	  System UUID:                16b4217e-c868-4379-81a0-46051de1bf49
	  Boot ID:                    8cbc44d2-ec4f-4f77-b000-fd28fe127c0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4cgxn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-670527-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-fcgbj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-670527-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ha-670527-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-mbcxc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-670527-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-670527-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-670527-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-670527-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-670527-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	
	
	Name:               ha-670527-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_05_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:05:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:08:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-670527-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24d136e447f34c399b15050eaf7b094c
	  System UUID:                24d136e4-47f3-4c39-9b15-050eaf7b094c
	  Boot ID:                    3d95536b-6e73-40d6-9bd4-2fc71b1a73bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4l8cf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-fq2lt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m55s              kube-proxy       
	  Normal  RegisteredNode           3m                 node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-670527-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s              node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal  RegisteredNode           2m56s              node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal  NodeReady                2m40s              kubelet          Node ha-670527-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep15 07:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050166] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041549] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.806616] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.438146] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580292] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.438155] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.055236] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053736] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.163785] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.149779] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293443] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.937975] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.762754] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.062847] kauditd_printk_skb: 158 callbacks suppressed
	[Sep15 07:02] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.107092] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.313948] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.323190] kauditd_printk_skb: 38 callbacks suppressed
	[Sep15 07:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b] <==
	{"level":"warn","ts":"2024-09-15T07:08:23.715630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.723448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.728222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.742485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.753102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.761258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.764969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.769224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.776763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.784870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.787095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.791985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.796333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.799521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.812632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.819886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.826101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.831372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.835660Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.841672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.848805Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.856708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.866861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.868862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:08:23.887989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:08:23 up 6 min,  0 users,  load average: 0.04, 0.19, 0.12
	Linux ha-670527 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230] <==
	I0915 07:07:49.124232       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:07:59.125242       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:07:59.125355       1 main.go:299] handling current node
	I0915 07:07:59.125394       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:07:59.125412       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:07:59.125566       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:07:59.125588       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:07:59.125651       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:07:59.125684       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:08:09.119269       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:08:09.119416       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:08:09.119674       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:08:09.119734       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:08:09.119841       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:08:09.119894       1 main.go:299] handling current node
	I0915 07:08:09.119967       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:08:09.120008       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:08:19.128230       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:08:19.128494       1 main.go:299] handling current node
	I0915 07:08:19.128539       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:08:19.128558       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:08:19.128743       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:08:19.128765       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:08:19.128820       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:08:19.128838       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6] <==
	I0915 07:02:01.340494       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 07:02:02.506665       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 07:02:02.519606       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0915 07:02:02.541694       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 07:02:06.741032       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0915 07:02:07.100027       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0915 07:04:49.287010       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51456: use of closed network connection
	E0915 07:04:49.478785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51488: use of closed network connection
	E0915 07:04:49.665458       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51508: use of closed network connection
	E0915 07:04:49.869558       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51518: use of closed network connection
	E0915 07:04:50.058020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51528: use of closed network connection
	E0915 07:04:50.244603       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51540: use of closed network connection
	E0915 07:04:50.418408       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51552: use of closed network connection
	E0915 07:04:50.596359       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51574: use of closed network connection
	E0915 07:04:50.783116       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51590: use of closed network connection
	E0915 07:04:51.068899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51614: use of closed network connection
	E0915 07:04:51.440856       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51656: use of closed network connection
	E0915 07:04:51.623756       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51666: use of closed network connection
	E0915 07:04:51.810349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51690: use of closed network connection
	E0915 07:04:52.035620       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51714: use of closed network connection
	E0915 07:05:23.709346       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0915 07:05:23.711222       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0915 07:05:23.712625       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0915 07:05:23.713864       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0915 07:05:23.715392       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.748303ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-670527-m04" result=null
	
	
	==> kube-controller-manager [bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf] <==
	I0915 07:05:23.158787       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-670527-m04" podCIDRs=["10.244.3.0/24"]
	I0915 07:05:23.158856       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:23.158884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:23.167230       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:24.122231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:24.159118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:24.552959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:26.307659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:26.308934       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-670527-m04"
	I0915 07:05:26.365352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:27.733420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:27.844709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:33.193577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:43.135820       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:43.136569       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-670527-m04"
	I0915 07:05:43.155183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:43.653863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:53.513402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:06:41.342481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:06:41.342958       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-670527-m04"
	I0915 07:06:41.362298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:06:41.510345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.028922ms"
	I0915 07:06:41.510751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="127.963µs"
	I0915 07:06:42.823669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:06:46.588018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	
	
	==> kube-proxy [b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:02:08.079904       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:02:08.096851       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	E0915 07:02:08.096998       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:02:08.138602       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:02:08.138741       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:02:08.138784       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:02:08.143584       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:02:08.144421       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:02:08.144550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:02:08.147853       1 config.go:199] "Starting service config controller"
	I0915 07:02:08.148197       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:02:08.148411       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:02:08.148448       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:02:08.149835       1 config.go:328] "Starting node config controller"
	I0915 07:02:08.152553       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:02:08.249198       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:02:08.249264       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:02:08.255066       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d] <==
	W0915 07:02:00.534536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 07:02:00.534589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.597420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 07:02:00.597899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.604546       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 07:02:00.604596       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 07:02:00.622867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 07:02:00.622918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.665454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 07:02:00.665506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.745175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 07:02:00.745309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.757869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 07:02:00.757923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.771782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 07:02:00.771947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0915 07:02:03.481353       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:04:42.984344       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:04:42.984530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5fc959e1-a77e-415a-bbea-3dd4303e82d9(default/busybox-7dff88458-gxwp9) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gxwp9"
	E0915 07:04:42.984580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" pod="default/busybox-7dff88458-gxwp9"
	I0915 07:04:42.984652       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:05:23.207787       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	E0915 07:05:23.207903       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 50b6a6aa-70b7-41b5-9554-5fef223d25a4(kube-system/kube-proxy-fq2lt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fq2lt"
	E0915 07:05:23.207927       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" pod="kube-system/kube-proxy-fq2lt"
	I0915 07:05:23.207964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	
	
	==> kubelet <==
	Sep 15 07:07:02 ha-670527 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:07:02 ha-670527 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:07:02 ha-670527 kubelet[1303]: E0915 07:07:02.612758    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384022612025469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:02 ha-670527 kubelet[1303]: E0915 07:07:02.612868    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384022612025469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:12 ha-670527 kubelet[1303]: E0915 07:07:12.614519    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384032614111077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:12 ha-670527 kubelet[1303]: E0915 07:07:12.614775    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384032614111077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:22 ha-670527 kubelet[1303]: E0915 07:07:22.618008    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384042617327584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:22 ha-670527 kubelet[1303]: E0915 07:07:22.618052    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384042617327584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:32 ha-670527 kubelet[1303]: E0915 07:07:32.620800    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384052620049751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:32 ha-670527 kubelet[1303]: E0915 07:07:32.621601    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384052620049751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:42 ha-670527 kubelet[1303]: E0915 07:07:42.623934    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384062622977188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:42 ha-670527 kubelet[1303]: E0915 07:07:42.624274    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384062622977188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:52 ha-670527 kubelet[1303]: E0915 07:07:52.626010    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384072625372300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:07:52 ha-670527 kubelet[1303]: E0915 07:07:52.626098    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384072625372300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:02 ha-670527 kubelet[1303]: E0915 07:08:02.493699    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:08:02 ha-670527 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:08:02 ha-670527 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:08:02 ha-670527 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:08:02 ha-670527 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:08:02 ha-670527 kubelet[1303]: E0915 07:08:02.631878    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384082631207727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:02 ha-670527 kubelet[1303]: E0915 07:08:02.631927    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384082631207727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:12 ha-670527 kubelet[1303]: E0915 07:08:12.632966    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384092632722443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:12 ha-670527 kubelet[1303]: E0915 07:08:12.633008    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384092632722443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:22 ha-670527 kubelet[1303]: E0915 07:08:22.635672    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384102634358572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:22 ha-670527 kubelet[1303]: E0915 07:08:22.636618    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384102634358572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
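In the node descriptions above, ha-670527-m02 carries the node.kubernetes.io/unreachable taints and reports every condition as Unknown ("Kubelet stopped posting node status"), consistent with the secondary control-plane node having been stopped by this test. A minimal sketch for re-checking that state by hand, assuming the kubeconfig context for this profile is still named ha-670527 (the same context the helper commands below use):

	# list readiness of all four nodes in the HA cluster
	kubectl --context ha-670527 get nodes -o wide
	# inspect the taints, conditions and recent events on the stopped secondary
	kubectl --context ha-670527 describe node ha-670527-m02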
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-670527 -n ha-670527
helpers_test.go:261: (dbg) Run:  kubectl --context ha-670527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.91s)
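For anyone reproducing this failure locally, the stop/restart flow that this serial group exercises can be approximated with commands along these lines. This is a sketch only: the profile name and verbosity flags are taken from the commands logged in this report, and "node stop" is assumed to be how the secondary was brought down.

	# stop the secondary control-plane node
	out/minikube-linux-amd64 -p ha-670527 node stop m02
	# query per-node status for the HA cluster
	out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
	# restart the node, as the RestartSecondaryNode test below does
	out/minikube-linux-amd64 -p ha-670527 node start m02 -v=7 --alsologtostderr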

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (3.191421455s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:08:28.384560   31596 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:08:28.384671   31596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:28.384679   31596 out.go:358] Setting ErrFile to fd 2...
	I0915 07:08:28.384683   31596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:28.384839   31596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:08:28.385012   31596 out.go:352] Setting JSON to false
	I0915 07:08:28.385037   31596 mustload.go:65] Loading cluster: ha-670527
	I0915 07:08:28.385082   31596 notify.go:220] Checking for updates...
	I0915 07:08:28.385510   31596 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:08:28.385526   31596 status.go:255] checking status of ha-670527 ...
	I0915 07:08:28.386104   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:28.386145   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:28.406161   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45389
	I0915 07:08:28.406609   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:28.407213   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:28.407234   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:28.407653   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:28.407850   31596 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:08:28.409514   31596 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:08:28.409531   31596 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:28.409986   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:28.410035   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:28.424472   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0915 07:08:28.424935   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:28.425391   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:28.425408   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:28.425681   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:28.426044   31596 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:08:28.428516   31596 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:28.428863   31596 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:28.428887   31596 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:28.428995   31596 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:28.429274   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:28.429306   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:28.443525   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I0915 07:08:28.443959   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:28.444489   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:28.444514   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:28.444819   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:28.444983   31596 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:08:28.445205   31596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:28.445237   31596 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:08:28.447855   31596 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:28.448210   31596 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:28.448229   31596 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:28.448377   31596 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:08:28.448539   31596 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:08:28.448685   31596 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:08:28.448782   31596 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:08:28.529362   31596 ssh_runner.go:195] Run: systemctl --version
	I0915 07:08:28.535249   31596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:28.549254   31596 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:28.549285   31596 api_server.go:166] Checking apiserver status ...
	I0915 07:08:28.549313   31596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:28.563533   31596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:08:28.573388   31596 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:28.573437   31596 ssh_runner.go:195] Run: ls
	I0915 07:08:28.579109   31596 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:28.587672   31596 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:28.587693   31596 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:08:28.587705   31596 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:28.587728   31596 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:08:28.588115   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:28.588158   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:28.602854   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0915 07:08:28.603289   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:28.603788   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:28.603810   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:28.604124   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:28.604308   31596 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:08:28.605833   31596 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:08:28.605850   31596 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:28.606242   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:28.606282   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:28.620560   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0915 07:08:28.620918   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:28.621396   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:28.621416   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:28.621692   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:28.621849   31596 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:08:28.624418   31596 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:28.624823   31596 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:28.624849   31596 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:28.624994   31596 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:28.625273   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:28.625304   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:28.640119   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0915 07:08:28.640580   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:28.641078   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:28.641114   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:28.641466   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:28.641628   31596 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:08:28.641838   31596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:28.641862   31596 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:08:28.644529   31596 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:28.644891   31596 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:28.644908   31596 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:28.645077   31596 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:08:28.645196   31596 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:08:28.645330   31596 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:08:28.645450   31596 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	W0915 07:08:31.186165   31596 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:31.186277   31596 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0915 07:08:31.186294   31596 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:31.186302   31596 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:08:31.186319   31596 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:31.186326   31596 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:08:31.186672   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:31.186743   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:31.201928   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44041
	I0915 07:08:31.202315   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:31.202765   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:31.202788   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:31.203148   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:31.203348   31596 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:08:31.204685   31596 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:08:31.204699   31596 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:31.204988   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:31.205030   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:31.219546   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0915 07:08:31.219982   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:31.220479   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:31.220497   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:31.220765   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:31.220938   31596 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:08:31.223406   31596 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:31.223824   31596 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:31.223859   31596 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:31.223995   31596 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:31.224297   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:31.224329   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:31.238693   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I0915 07:08:31.239125   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:31.239652   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:31.239674   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:31.239945   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:31.240099   31596 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:08:31.240339   31596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:31.240360   31596 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:08:31.243178   31596 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:31.243619   31596 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:31.243645   31596 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:31.243795   31596 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:08:31.243951   31596 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:08:31.244075   31596 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:08:31.244192   31596 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:08:31.325422   31596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:31.341231   31596 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:31.341263   31596 api_server.go:166] Checking apiserver status ...
	I0915 07:08:31.341305   31596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:31.356255   31596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:08:31.366619   31596 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:31.366703   31596 ssh_runner.go:195] Run: ls
	I0915 07:08:31.372199   31596 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:31.376446   31596 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:31.376469   31596 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:08:31.376480   31596 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:31.376500   31596 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:08:31.376803   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:31.376857   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:31.392009   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0915 07:08:31.392488   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:31.392928   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:31.392947   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:31.393225   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:31.393394   31596 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:08:31.394980   31596 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:08:31.394995   31596 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:31.395333   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:31.395372   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:31.410987   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0915 07:08:31.411345   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:31.411828   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:31.411855   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:31.412125   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:31.412321   31596 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:08:31.415391   31596 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:31.415883   31596 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:31.415919   31596 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:31.416067   31596 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:31.416420   31596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:31.416464   31596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:31.431035   31596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0915 07:08:31.431521   31596 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:31.431992   31596 main.go:141] libmachine: Using API Version  1
	I0915 07:08:31.432013   31596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:31.432313   31596 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:31.432508   31596 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:08:31.432671   31596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:31.432690   31596 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:08:31.435497   31596 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:31.435910   31596 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:31.435933   31596 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:31.436074   31596 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:08:31.436214   31596 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:08:31.436352   31596 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:08:31.436452   31596 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:08:31.517336   31596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:31.531274   31596 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (2.476261669s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:08:32.169101   31697 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:08:32.169206   31697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:32.169218   31697 out.go:358] Setting ErrFile to fd 2...
	I0915 07:08:32.169224   31697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:32.169413   31697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:08:32.169609   31697 out.go:352] Setting JSON to false
	I0915 07:08:32.169640   31697 mustload.go:65] Loading cluster: ha-670527
	I0915 07:08:32.169727   31697 notify.go:220] Checking for updates...
	I0915 07:08:32.170135   31697 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:08:32.170150   31697 status.go:255] checking status of ha-670527 ...
	I0915 07:08:32.170558   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:32.170622   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:32.189651   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0915 07:08:32.190200   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:32.190827   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:32.190857   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:32.191148   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:32.191303   31697 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:08:32.192830   31697 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:08:32.192846   31697 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:32.193126   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:32.193156   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:32.207676   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0915 07:08:32.208026   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:32.208522   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:32.208545   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:32.208898   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:32.209115   31697 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:08:32.211771   31697 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:32.212241   31697 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:32.212278   31697 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:32.212360   31697 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:32.212765   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:32.212809   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:32.227728   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0915 07:08:32.228223   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:32.228777   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:32.228798   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:32.229091   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:32.229280   31697 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:08:32.229509   31697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:32.229550   31697 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:08:32.232394   31697 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:32.232861   31697 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:32.232888   31697 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:32.233011   31697 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:08:32.233249   31697 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:08:32.233415   31697 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:08:32.233560   31697 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:08:32.327310   31697 ssh_runner.go:195] Run: systemctl --version
	I0915 07:08:32.333917   31697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:32.352404   31697 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:32.352440   31697 api_server.go:166] Checking apiserver status ...
	I0915 07:08:32.352483   31697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:32.367103   31697 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:08:32.377166   31697 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:32.377220   31697 ssh_runner.go:195] Run: ls
	I0915 07:08:32.382178   31697 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:32.386401   31697 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:32.386427   31697 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:08:32.386436   31697 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:32.386450   31697 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:08:32.386743   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:32.386779   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:32.401432   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0915 07:08:32.401782   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:32.402289   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:32.402309   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:32.402587   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:32.402769   31697 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:08:32.404340   31697 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:08:32.404357   31697 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:32.404642   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:32.404673   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:32.420685   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40321
	I0915 07:08:32.421014   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:32.421635   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:32.421663   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:32.421957   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:32.422151   31697 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:08:32.424835   31697 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:32.425225   31697 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:32.425252   31697 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:32.425345   31697 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:32.425714   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:32.425752   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:32.441067   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0915 07:08:32.441468   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:32.441932   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:32.441960   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:32.442262   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:32.442448   31697 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:08:32.442625   31697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:32.442648   31697 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:08:32.445480   31697 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:32.445891   31697 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:32.445916   31697 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:32.446057   31697 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:08:32.446214   31697 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:08:32.446357   31697 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:08:32.446465   31697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	W0915 07:08:34.258138   31697 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:34.258230   31697 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0915 07:08:34.258245   31697 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:34.258254   31697 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:08:34.258271   31697 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:34.258279   31697 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:08:34.258595   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:34.258632   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:34.273573   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I0915 07:08:34.273996   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:34.274446   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:34.274464   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:34.274785   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:34.274954   31697 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:08:34.276411   31697 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:08:34.276427   31697 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:34.276734   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:34.276768   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:34.290847   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
	I0915 07:08:34.291218   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:34.291608   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:34.291627   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:34.291875   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:34.292015   31697 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:08:34.294478   31697 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:34.294830   31697 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:34.294853   31697 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:34.294949   31697 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:34.295234   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:34.295297   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:34.309305   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0915 07:08:34.309758   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:34.310294   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:34.310317   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:34.310619   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:34.310809   31697 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:08:34.310980   31697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:34.311003   31697 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:08:34.313712   31697 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:34.314132   31697 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:34.314146   31697 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:34.314291   31697 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:08:34.314425   31697 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:08:34.314584   31697 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:08:34.314725   31697 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:08:34.397660   31697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:34.414344   31697 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:34.414377   31697 api_server.go:166] Checking apiserver status ...
	I0915 07:08:34.414425   31697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:34.429753   31697 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:08:34.440353   31697 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:34.440398   31697 ssh_runner.go:195] Run: ls
	I0915 07:08:34.444691   31697 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:34.448833   31697 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:34.448855   31697 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:08:34.448865   31697 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:34.448884   31697 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:08:34.449230   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:34.449264   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:34.463498   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I0915 07:08:34.463903   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:34.464357   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:34.464373   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:34.464655   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:34.464829   31697 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:08:34.466381   31697 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:08:34.466399   31697 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:34.466780   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:34.466820   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:34.481192   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0915 07:08:34.481590   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:34.482077   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:34.482098   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:34.482439   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:34.482600   31697 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:08:34.485111   31697 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:34.485456   31697 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:34.485487   31697 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:34.485621   31697 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:34.486016   31697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:34.486056   31697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:34.501134   31697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0915 07:08:34.501481   31697 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:34.501947   31697 main.go:141] libmachine: Using API Version  1
	I0915 07:08:34.501971   31697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:34.502305   31697 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:34.502498   31697 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:08:34.502678   31697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:34.502702   31697 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:08:34.505414   31697 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:34.505776   31697 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:34.505802   31697 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:34.505957   31697 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:08:34.506110   31697 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:08:34.506241   31697 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:08:34.506394   31697 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:08:34.590235   31697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:34.604135   31697 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (5.207663463s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:08:35.589753   31798 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:08:35.589899   31798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:35.589912   31798 out.go:358] Setting ErrFile to fd 2...
	I0915 07:08:35.589920   31798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:35.590110   31798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:08:35.590281   31798 out.go:352] Setting JSON to false
	I0915 07:08:35.590309   31798 mustload.go:65] Loading cluster: ha-670527
	I0915 07:08:35.590354   31798 notify.go:220] Checking for updates...
	I0915 07:08:35.590755   31798 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:08:35.590771   31798 status.go:255] checking status of ha-670527 ...
	I0915 07:08:35.591266   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:35.591303   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:35.610057   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0915 07:08:35.610599   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:35.611099   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:35.611121   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:35.611425   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:35.611596   31798 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:08:35.612960   31798 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:08:35.612973   31798 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:35.613252   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:35.613301   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:35.627906   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44643
	I0915 07:08:35.628288   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:35.628766   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:35.628789   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:35.629055   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:35.629301   31798 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:08:35.631798   31798 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:35.632218   31798 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:35.632244   31798 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:35.632385   31798 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:35.632662   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:35.632699   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:35.646927   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I0915 07:08:35.647355   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:35.647796   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:35.647816   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:35.648106   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:35.648274   31798 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:08:35.648464   31798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:35.648493   31798 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:08:35.651242   31798 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:35.651660   31798 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:35.651683   31798 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:35.651838   31798 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:08:35.652002   31798 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:08:35.652116   31798 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:08:35.652241   31798 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:08:35.733863   31798 ssh_runner.go:195] Run: systemctl --version
	I0915 07:08:35.740033   31798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:35.754197   31798 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:35.754228   31798 api_server.go:166] Checking apiserver status ...
	I0915 07:08:35.754258   31798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:35.768169   31798 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:08:35.777397   31798 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:35.777453   31798 ssh_runner.go:195] Run: ls
	I0915 07:08:35.781765   31798 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:35.785698   31798 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:35.785719   31798 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:08:35.785728   31798 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:35.785745   31798 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:08:35.786108   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:35.786154   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:35.801138   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0915 07:08:35.801579   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:35.802047   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:35.802065   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:35.802387   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:35.802570   31798 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:08:35.804108   31798 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:08:35.804123   31798 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:35.804449   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:35.804484   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:35.819373   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0915 07:08:35.819753   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:35.820185   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:35.820206   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:35.820542   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:35.820699   31798 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:08:35.823605   31798 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:35.823960   31798 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:35.823985   31798 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:35.824078   31798 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:35.824357   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:35.824389   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:35.839207   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0915 07:08:35.839763   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:35.840159   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:35.840176   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:35.840456   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:35.840601   31798 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:08:35.840783   31798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:35.840803   31798 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:08:35.843499   31798 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:35.843926   31798 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:35.843950   31798 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:35.844053   31798 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:08:35.844247   31798 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:08:35.844421   31798 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:08:35.844566   31798 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	W0915 07:08:37.330086   31798 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:37.330138   31798 retry.go:31] will retry after 302.777238ms: dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:40.402141   31798 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:40.402238   31798 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0915 07:08:40.402256   31798 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:40.402269   31798 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:08:40.402305   31798 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:40.402318   31798 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:08:40.402764   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:40.402826   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:40.421087   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37423
	I0915 07:08:40.421516   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:40.422157   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:40.422185   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:40.422491   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:40.422685   31798 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:08:40.424232   31798 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:08:40.424251   31798 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:40.424560   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:40.424599   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:40.439601   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0915 07:08:40.439966   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:40.440425   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:40.440445   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:40.440750   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:40.440931   31798 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:08:40.443665   31798 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:40.444035   31798 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:40.444070   31798 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:40.444218   31798 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:40.444534   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:40.444568   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:40.459227   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46653
	I0915 07:08:40.459675   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:40.460137   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:40.460158   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:40.460429   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:40.460622   31798 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:08:40.460785   31798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:40.460809   31798 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:08:40.463411   31798 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:40.463801   31798 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:40.463827   31798 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:40.463925   31798 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:08:40.464105   31798 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:08:40.464249   31798 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:08:40.464392   31798 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:08:40.549742   31798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:40.564290   31798 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:40.564315   31798 api_server.go:166] Checking apiserver status ...
	I0915 07:08:40.564354   31798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:40.578311   31798 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:08:40.589499   31798 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:40.589554   31798 ssh_runner.go:195] Run: ls
	I0915 07:08:40.594613   31798 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:40.598582   31798 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:40.598600   31798 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:08:40.598608   31798 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:40.598626   31798 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:08:40.598903   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:40.598932   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:40.614591   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0915 07:08:40.614961   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:40.615346   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:40.615377   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:40.615689   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:40.615858   31798 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:08:40.617250   31798 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:08:40.617264   31798 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:40.617527   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:40.617557   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:40.632668   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I0915 07:08:40.632995   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:40.633469   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:40.633492   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:40.633838   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:40.634018   31798 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:08:40.636964   31798 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:40.637376   31798 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:40.637412   31798 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:40.637491   31798 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:40.637791   31798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:40.637850   31798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:40.654797   31798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0915 07:08:40.655218   31798 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:40.655764   31798 main.go:141] libmachine: Using API Version  1
	I0915 07:08:40.655789   31798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:40.656122   31798 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:40.656337   31798 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:08:40.656532   31798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:40.656558   31798 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:08:40.659501   31798 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:40.660029   31798 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:40.660052   31798 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:40.660124   31798 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:08:40.660290   31798 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:08:40.660421   31798 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:08:40.660549   31798 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:08:40.741140   31798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:40.755960   31798 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
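The apiserver check visible in the stderr above reduces to a plain HTTPS GET against the control-plane load balancer's /healthz endpoint (https://192.168.39.254:8443/healthz), with a 200 "ok" response reported as a running apiserver. The Go sketch below reproduces that probe with only the standard library; the endpoint URL is copied from the log, while the TLS handling (skipping certificate verification) is an assumption made for this illustration and is not minikube's actual client code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same kind of probe the api_server.go lines show:
// GET the /healthz URL and require an HTTP 200 response ("ok").
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test cluster serves a self-signed certificate, so this
			// sketch skips verification (assumption; not for production use).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}

A 200 here is what produces "apiserver: Running" in the status tables; ha-670527-m02 never reaches this step because its SSH dial fails first.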
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
E0915 07:08:46.544550   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (4.866569558s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:08:42.073265   31898 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:08:42.073389   31898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:42.073398   31898 out.go:358] Setting ErrFile to fd 2...
	I0915 07:08:42.073407   31898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:42.073610   31898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:08:42.073799   31898 out.go:352] Setting JSON to false
	I0915 07:08:42.073854   31898 mustload.go:65] Loading cluster: ha-670527
	I0915 07:08:42.073948   31898 notify.go:220] Checking for updates...
	I0915 07:08:42.074320   31898 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:08:42.074335   31898 status.go:255] checking status of ha-670527 ...
	I0915 07:08:42.074755   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:42.074819   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:42.092367   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0915 07:08:42.092823   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:42.093509   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:42.093538   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:42.093880   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:42.094035   31898 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:08:42.095522   31898 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:08:42.095544   31898 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:42.095862   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:42.095892   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:42.113256   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35509
	I0915 07:08:42.113711   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:42.114209   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:42.114235   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:42.114578   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:42.114732   31898 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:08:42.117596   31898 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:42.118050   31898 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:42.118087   31898 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:42.118214   31898 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:42.118536   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:42.118574   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:42.133765   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0915 07:08:42.134234   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:42.134748   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:42.134780   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:42.135092   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:42.135248   31898 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:08:42.135420   31898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:42.135451   31898 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:08:42.138156   31898 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:42.138549   31898 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:42.138582   31898 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:42.138725   31898 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:08:42.138887   31898 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:08:42.139030   31898 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:08:42.139158   31898 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:08:42.222103   31898 ssh_runner.go:195] Run: systemctl --version
	I0915 07:08:42.228312   31898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:42.245101   31898 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:42.245137   31898 api_server.go:166] Checking apiserver status ...
	I0915 07:08:42.245175   31898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:42.260224   31898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:08:42.269748   31898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:42.269832   31898 ssh_runner.go:195] Run: ls
	I0915 07:08:42.274640   31898 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:42.278518   31898 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:42.278536   31898 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:08:42.278545   31898 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:42.278587   31898 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:08:42.278856   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:42.278902   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:42.293314   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0915 07:08:42.293748   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:42.294180   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:42.294207   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:42.294529   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:42.294710   31898 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:08:42.296062   31898 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:08:42.296079   31898 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:42.296406   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:42.296442   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:42.310960   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46825
	I0915 07:08:42.311258   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:42.311679   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:42.311706   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:42.311994   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:42.312176   31898 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:08:42.314962   31898 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:42.315387   31898 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:42.315416   31898 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:42.315562   31898 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:42.315956   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:42.316000   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:42.331275   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I0915 07:08:42.331661   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:42.332062   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:42.332080   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:42.332415   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:42.332575   31898 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:08:42.332729   31898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:42.332748   31898 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:08:42.335153   31898 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:42.335517   31898 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:42.335551   31898 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:42.335675   31898 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:08:42.335826   31898 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:08:42.335941   31898 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:08:42.336038   31898 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	W0915 07:08:43.474142   31898 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:43.474191   31898 retry.go:31] will retry after 188.652021ms: dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:46.546085   31898 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:46.546177   31898 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0915 07:08:46.546195   31898 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:46.546211   31898 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:08:46.546238   31898 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:46.546252   31898 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:08:46.546667   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:46.546718   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:46.562831   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34909
	I0915 07:08:46.563336   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:46.563816   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:46.563838   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:46.564116   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:46.564298   31898 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:08:46.565764   31898 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:08:46.565781   31898 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:46.566101   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:46.566133   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:46.582218   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0915 07:08:46.582728   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:46.583261   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:46.583289   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:46.583684   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:46.583906   31898 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:08:46.587317   31898 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:46.587771   31898 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:46.587803   31898 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:46.587926   31898 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:46.588286   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:46.588353   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:46.602919   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0915 07:08:46.603346   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:46.603786   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:46.603804   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:46.604119   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:46.604310   31898 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:08:46.604472   31898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:46.604493   31898 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:08:46.607058   31898 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:46.607457   31898 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:46.607481   31898 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:46.607636   31898 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:08:46.607762   31898 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:08:46.607898   31898 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:08:46.608021   31898 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:08:46.689209   31898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:46.706788   31898 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:46.706816   31898 api_server.go:166] Checking apiserver status ...
	I0915 07:08:46.706856   31898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:46.724023   31898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:08:46.735844   31898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:46.735905   31898 ssh_runner.go:195] Run: ls
	I0915 07:08:46.740208   31898 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:46.744453   31898 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:46.744475   31898 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:08:46.744483   31898 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:46.744497   31898 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:08:46.744829   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:46.744864   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:46.759297   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I0915 07:08:46.759690   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:46.760126   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:46.760144   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:46.760456   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:46.760624   31898 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:08:46.762201   31898 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:08:46.762215   31898 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:46.762490   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:46.762516   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:46.776904   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45113
	I0915 07:08:46.777368   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:46.777787   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:46.777815   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:46.778144   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:46.778322   31898 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:08:46.780963   31898 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:46.781358   31898 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:46.781389   31898 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:46.781550   31898 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:46.781875   31898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:46.781918   31898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:46.796768   31898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0915 07:08:46.797183   31898 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:46.797672   31898 main.go:141] libmachine: Using API Version  1
	I0915 07:08:46.797692   31898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:46.798028   31898 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:46.798237   31898 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:08:46.798503   31898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:46.798525   31898 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:08:46.801248   31898 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:46.801750   31898 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:46.801776   31898 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:46.801917   31898 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:08:46.802076   31898 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:08:46.802211   31898 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:08:46.802358   31898 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:08:46.881550   31898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:46.897018   31898 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
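The repeated "dial tcp 192.168.39.222:22: connect: no route to host" lines show where the m02 check gives up: at the SSH dial itself, before any command can run on the node. Below is a minimal Go sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh; the address, username, and key path are copied from the log, while the retry count and backoff are illustrative assumptions, not minikube's sshutil/retry implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry attempts an SSH connection a few times, logging each
// failure in the spirit of the sshutil/retry lines above.
func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
		Timeout:         10 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(time.Duration(i+1) * 300 * time.Millisecond) // simple linear backoff
	}
	return nil, fmt.Errorf("giving up on %s after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	// Values taken from the log above; adjust for your own environment.
	_, err := dialWithRetry("192.168.39.222:22", "docker",
		"/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa", 3)
	if err != nil {
		fmt.Println(err) // expected to fail while the m02 VM is unreachable
	}
}

When every attempt fails, the status path reports Host:Error with Kubelet and APIServer as Nonexistent for that node, which matches the ha-670527-m02 rows in the status tables above.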
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (3.730356466s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:08:51.779607   32014 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:08:51.779703   32014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:51.779711   32014 out.go:358] Setting ErrFile to fd 2...
	I0915 07:08:51.779714   32014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:08:51.779872   32014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:08:51.780018   32014 out.go:352] Setting JSON to false
	I0915 07:08:51.780043   32014 mustload.go:65] Loading cluster: ha-670527
	I0915 07:08:51.780187   32014 notify.go:220] Checking for updates...
	I0915 07:08:51.780451   32014 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:08:51.780465   32014 status.go:255] checking status of ha-670527 ...
	I0915 07:08:51.780892   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:51.780953   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:51.799713   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0915 07:08:51.800129   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:51.800698   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:51.800728   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:51.801072   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:51.801246   32014 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:08:51.802773   32014 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:08:51.802790   32014 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:51.803055   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:51.803110   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:51.817693   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34621
	I0915 07:08:51.818124   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:51.818586   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:51.818606   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:51.818946   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:51.819124   32014 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:08:51.822188   32014 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:51.822637   32014 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:51.822663   32014 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:51.822820   32014 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:08:51.823146   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:51.823198   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:51.838138   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0915 07:08:51.838595   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:51.839106   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:51.839142   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:51.839474   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:51.839707   32014 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:08:51.839888   32014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:51.839928   32014 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:08:51.842790   32014 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:51.843257   32014 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:08:51.843305   32014 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:08:51.843591   32014 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:08:51.843832   32014 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:08:51.843982   32014 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:08:51.844165   32014 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:08:51.925982   32014 ssh_runner.go:195] Run: systemctl --version
	I0915 07:08:51.932346   32014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:51.947586   32014 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:51.947627   32014 api_server.go:166] Checking apiserver status ...
	I0915 07:08:51.947666   32014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:51.962241   32014 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:08:51.973864   32014 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:51.973917   32014 ssh_runner.go:195] Run: ls
	I0915 07:08:51.978811   32014 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:51.982890   32014 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:51.982911   32014 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:08:51.982920   32014 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:51.982935   32014 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:08:51.983221   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:51.983252   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:51.998337   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0915 07:08:51.998765   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:51.999237   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:51.999264   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:51.999586   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:51.999786   32014 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:08:52.001460   32014 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:08:52.001481   32014 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:52.001869   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:52.001915   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:52.016732   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0915 07:08:52.017193   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:52.017647   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:52.017678   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:52.017993   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:52.018194   32014 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:08:52.021264   32014 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:52.021700   32014 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:52.021734   32014 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:52.021854   32014 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:08:52.022172   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:52.022212   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:52.036852   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I0915 07:08:52.037324   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:52.037773   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:52.037797   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:52.038154   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:52.038409   32014 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:08:52.038577   32014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:52.038623   32014 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:08:52.041235   32014 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:52.041618   32014 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:08:52.041635   32014 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:08:52.041794   32014 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:08:52.041981   32014 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:08:52.042122   32014 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:08:52.042252   32014 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	W0915 07:08:55.122039   32014 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:08:55.122150   32014 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0915 07:08:55.122174   32014 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:55.122187   32014 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:08:55.122220   32014 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:08:55.122232   32014 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:08:55.122540   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:55.122590   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:55.136945   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0915 07:08:55.137385   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:55.137826   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:55.137852   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:55.138148   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:55.138332   32014 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:08:55.139794   32014 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:08:55.139809   32014 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:55.140094   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:55.140126   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:55.154231   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35909
	I0915 07:08:55.154630   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:55.155151   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:55.155179   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:55.155521   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:55.155706   32014 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:08:55.158456   32014 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:55.158848   32014 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:55.158876   32014 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:55.159013   32014 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:08:55.159378   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:55.159416   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:55.174100   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0915 07:08:55.174630   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:55.175086   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:55.175113   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:55.175417   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:55.175591   32014 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:08:55.175777   32014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:55.175885   32014 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:08:55.178848   32014 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:55.179263   32014 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:08:55.179301   32014 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:08:55.179435   32014 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:08:55.179621   32014 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:08:55.179811   32014 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:08:55.179966   32014 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:08:55.262173   32014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:55.282041   32014 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:08:55.282065   32014 api_server.go:166] Checking apiserver status ...
	I0915 07:08:55.282098   32014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:08:55.296302   32014 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:08:55.307406   32014 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:08:55.307449   32014 ssh_runner.go:195] Run: ls
	I0915 07:08:55.312161   32014 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:08:55.317203   32014 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:08:55.317221   32014 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:08:55.317228   32014 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:08:55.317241   32014 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:08:55.317507   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:55.317538   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:55.331789   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0915 07:08:55.332198   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:55.332646   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:55.332664   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:55.332961   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:55.333118   32014 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:08:55.334657   32014 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:08:55.334670   32014 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:55.335002   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:55.335042   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:55.349242   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0915 07:08:55.349583   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:55.350088   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:55.350109   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:55.350459   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:55.350642   32014 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:08:55.353190   32014 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:55.353635   32014 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:55.353669   32014 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:55.353775   32014 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:08:55.354200   32014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:08:55.354247   32014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:08:55.369093   32014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39151
	I0915 07:08:55.369550   32014 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:08:55.370026   32014 main.go:141] libmachine: Using API Version  1
	I0915 07:08:55.370050   32014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:08:55.370368   32014 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:08:55.370545   32014 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:08:55.370720   32014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:08:55.370800   32014 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:08:55.373528   32014 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:55.373922   32014 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:08:55.373951   32014 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:08:55.374080   32014 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:08:55.374251   32014 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:08:55.374394   32014 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:08:55.374494   32014 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:08:55.453028   32014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:08:55.466569   32014 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
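
Note on the run above: every SSH dial to ha-670527-m02 at 192.168.39.222:22 fails with "connect: no route to host", which is why status falls back to Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent for that node while the other nodes keep reporting Running. A minimal reachability probe (an illustrative sketch only, not minikube code; the address is copied from the log) can confirm the same condition from the CI host:

	// probe_m02.go - illustrative sketch; reproduces the TCP reachability
	// check that the log shows failing for ha-670527-m02 (192.168.39.222:22).
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "192.168.39.222:22" // address reported in the log; adjust as needed
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// Expected while the node is unreachable:
			// dial tcp 192.168.39.222:22: connect: no route to host
			fmt.Println("unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("reachable:", addr)
	}
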
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (3.738883424s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:09:01.218674   32131 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:09:01.218768   32131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:01.218775   32131 out.go:358] Setting ErrFile to fd 2...
	I0915 07:09:01.218780   32131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:01.218942   32131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:09:01.219105   32131 out.go:352] Setting JSON to false
	I0915 07:09:01.219133   32131 mustload.go:65] Loading cluster: ha-670527
	I0915 07:09:01.219239   32131 notify.go:220] Checking for updates...
	I0915 07:09:01.219511   32131 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:09:01.219535   32131 status.go:255] checking status of ha-670527 ...
	I0915 07:09:01.219920   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:01.219994   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:01.236244   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32797
	I0915 07:09:01.236702   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:01.237358   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:01.237397   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:01.237787   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:01.237995   32131 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:09:01.239644   32131 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:09:01.239661   32131 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:09:01.239937   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:01.239975   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:01.254481   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35729
	I0915 07:09:01.254927   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:01.255382   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:01.255402   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:01.255717   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:01.256042   32131 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:09:01.258642   32131 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:01.259051   32131 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:09:01.259080   32131 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:01.259250   32131 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:09:01.259522   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:01.259569   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:01.273858   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
	I0915 07:09:01.274277   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:01.274690   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:01.274703   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:01.274998   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:01.275201   32131 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:09:01.275358   32131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:01.275378   32131 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:09:01.277973   32131 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:01.278384   32131 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:09:01.278421   32131 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:01.278546   32131 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:09:01.278710   32131 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:09:01.278842   32131 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:09:01.278963   32131 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:09:01.362612   32131 ssh_runner.go:195] Run: systemctl --version
	I0915 07:09:01.370297   32131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:01.386157   32131 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:09:01.386188   32131 api_server.go:166] Checking apiserver status ...
	I0915 07:09:01.386223   32131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:09:01.401250   32131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:09:01.413207   32131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:09:01.413267   32131 ssh_runner.go:195] Run: ls
	I0915 07:09:01.418945   32131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:09:01.424982   32131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:09:01.425004   32131 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:09:01.425013   32131 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:01.425032   32131 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:09:01.425358   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:01.425400   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:01.440204   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0915 07:09:01.440690   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:01.441172   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:01.441199   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:01.441522   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:01.441684   32131 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:09:01.443144   32131 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:09:01.443161   32131 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:09:01.443449   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:01.443491   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:01.459842   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0915 07:09:01.460248   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:01.460699   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:01.460724   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:01.461014   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:01.461186   32131 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:09:01.463955   32131 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:09:01.464415   32131 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:09:01.464438   32131 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:09:01.464584   32131 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:09:01.464854   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:01.464891   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:01.480098   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0915 07:09:01.480568   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:01.480999   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:01.481018   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:01.481335   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:01.481535   32131 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:09:01.481682   32131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:01.481710   32131 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:09:01.484546   32131 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:09:01.484970   32131 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:09:01.484990   32131 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:09:01.485127   32131 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:09:01.485321   32131 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:09:01.485466   32131 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:09:01.485574   32131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	W0915 07:09:04.562067   32131 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0915 07:09:04.562147   32131 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0915 07:09:04.562161   32131 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:09:04.562168   32131 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:09:04.562185   32131 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0915 07:09:04.562192   32131 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:09:04.562500   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:04.562542   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:04.577006   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0915 07:09:04.577458   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:04.577931   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:04.577945   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:04.578224   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:04.578399   32131 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:09:04.579658   32131 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:09:04.579681   32131 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:09:04.579953   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:04.579986   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:04.595311   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46705
	I0915 07:09:04.595782   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:04.596336   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:04.596359   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:04.596643   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:04.596816   32131 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:09:04.599653   32131 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:04.600003   32131 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:09:04.600030   32131 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:04.600225   32131 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:09:04.600534   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:04.600578   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:04.615661   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0915 07:09:04.616112   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:04.616635   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:04.616665   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:04.616958   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:04.617165   32131 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:09:04.617343   32131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:04.617360   32131 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:09:04.620176   32131 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:04.620600   32131 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:09:04.620627   32131 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:04.620762   32131 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:09:04.620927   32131 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:09:04.621087   32131 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:09:04.621227   32131 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:09:04.701307   32131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:04.718505   32131 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:09:04.718530   32131 api_server.go:166] Checking apiserver status ...
	I0915 07:09:04.718572   32131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:09:04.733097   32131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:09:04.742874   32131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:09:04.742937   32131 ssh_runner.go:195] Run: ls
	I0915 07:09:04.747924   32131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:09:04.754014   32131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:09:04.754036   32131 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:09:04.754044   32131 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:04.754058   32131 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:09:04.754374   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:04.754416   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:04.770378   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39145
	I0915 07:09:04.770903   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:04.771380   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:04.771395   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:04.771674   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:04.771852   32131 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:09:04.773205   32131 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:09:04.773220   32131 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:09:04.773513   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:04.773561   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:04.788515   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I0915 07:09:04.788900   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:04.789308   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:04.789328   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:04.789659   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:04.789901   32131 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:09:04.792654   32131 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:04.793054   32131 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:09:04.793088   32131 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:04.793196   32131 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:09:04.793627   32131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:04.793696   32131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:04.808997   32131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I0915 07:09:04.809437   32131 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:04.809933   32131 main.go:141] libmachine: Using API Version  1
	I0915 07:09:04.809958   32131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:04.810282   32131 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:04.810489   32131 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:09:04.810675   32131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:04.810697   32131 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:09:04.813563   32131 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:04.814004   32131 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:09:04.814025   32131 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:04.814140   32131 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:09:04.814319   32131 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:09:04.814483   32131 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:09:04.814633   32131 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:09:04.898090   32131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:04.913429   32131 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
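
Note on the run above versus the one below: at 07:09:01 the m02 host is still reported as Error and the command exits with status 3; by 07:09:11 GetState returns "Stopped", the SSH/df/healthz checks are skipped for that node, and the command exits with status 7. A hedged sketch that drives the same binary and reports its exit code (binary path and flags are taken verbatim from the log; the meaning of codes 3 and 7 is inferred from these runs only, not a documented contract):

	// status_exitcode.go - illustrative sketch; runs the same status command
	// the test uses and prints its exit code.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-670527",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				// Observed in this report: 3 while m02 was Error, 7 once Stopped.
				fmt.Println("exit code:", ee.ExitCode())
				return
			}
			fmt.Println("failed to run:", err)
			return
		}
		fmt.Println("exit code: 0")
	}
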
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 7 (620.182402ms)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:09:11.531338   32267 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:09:11.531593   32267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:11.531603   32267 out.go:358] Setting ErrFile to fd 2...
	I0915 07:09:11.531610   32267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:11.531806   32267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:09:11.531986   32267 out.go:352] Setting JSON to false
	I0915 07:09:11.532022   32267 mustload.go:65] Loading cluster: ha-670527
	I0915 07:09:11.532122   32267 notify.go:220] Checking for updates...
	I0915 07:09:11.532480   32267 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:09:11.532496   32267 status.go:255] checking status of ha-670527 ...
	I0915 07:09:11.532945   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.533008   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.552444   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
	I0915 07:09:11.552870   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.553359   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.553386   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.553758   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.554027   32267 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:09:11.555846   32267 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:09:11.555866   32267 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:09:11.556202   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.556236   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.571158   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
	I0915 07:09:11.571612   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.572192   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.572214   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.572551   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.572728   32267 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:09:11.575391   32267 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:11.575749   32267 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:09:11.575782   32267 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:11.575898   32267 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:09:11.576183   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.576216   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.593720   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0915 07:09:11.594224   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.594815   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.594839   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.595195   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.595385   32267 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:09:11.595572   32267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:11.595597   32267 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:09:11.598770   32267 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:11.599175   32267 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:09:11.599211   32267 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:11.599396   32267 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:09:11.599559   32267 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:09:11.599700   32267 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:09:11.599832   32267 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:09:11.684941   32267 ssh_runner.go:195] Run: systemctl --version
	I0915 07:09:11.692128   32267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:11.711253   32267 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:09:11.711286   32267 api_server.go:166] Checking apiserver status ...
	I0915 07:09:11.711318   32267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:09:11.727469   32267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:09:11.737977   32267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:09:11.738037   32267 ssh_runner.go:195] Run: ls
	I0915 07:09:11.743070   32267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:09:11.747357   32267 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:09:11.747382   32267 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:09:11.747394   32267 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:11.747428   32267 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:09:11.747729   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.747761   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.762533   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I0915 07:09:11.762964   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.763499   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.763518   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.763864   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.764085   32267 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:09:11.766004   32267 status.go:330] ha-670527-m02 host status = "Stopped" (err=<nil>)
	I0915 07:09:11.766018   32267 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:11.766025   32267 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:11.766044   32267 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:09:11.766332   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.766376   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.780815   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36919
	I0915 07:09:11.781251   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.781708   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.781728   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.782020   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.782217   32267 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:09:11.783471   32267 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:09:11.783484   32267 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:09:11.783761   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.783790   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.798869   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0915 07:09:11.799339   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.799823   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.799847   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.800115   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.800285   32267 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:09:11.802811   32267 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:11.803157   32267 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:09:11.803184   32267 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:11.803290   32267 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:09:11.803697   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.803733   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.818736   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I0915 07:09:11.819156   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.819570   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.819596   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.819917   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.820125   32267 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:09:11.820341   32267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:11.820359   32267 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:09:11.823018   32267 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:11.823455   32267 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:09:11.823476   32267 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:11.823614   32267 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:09:11.823763   32267 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:09:11.823881   32267 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:09:11.824018   32267 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:09:11.909486   32267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:11.925473   32267 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:09:11.925501   32267 api_server.go:166] Checking apiserver status ...
	I0915 07:09:11.925540   32267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:09:11.939342   32267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:09:11.949555   32267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:09:11.949634   32267 ssh_runner.go:195] Run: ls
	I0915 07:09:11.954240   32267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:09:11.958653   32267 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:09:11.958675   32267 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:09:11.958683   32267 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:11.958698   32267 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:09:11.958976   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.959012   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.974009   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0915 07:09:11.974490   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.975014   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.975039   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.975322   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.975516   32267 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:09:11.977142   32267 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:09:11.977168   32267 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:09:11.977486   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.977520   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:11.992098   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0915 07:09:11.992486   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:11.992921   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:11.992939   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:11.993236   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:11.993427   32267 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:09:11.996227   32267 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:11.996633   32267 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:09:11.996652   32267 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:11.996795   32267 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:09:11.997088   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:11.997121   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:12.012311   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45047
	I0915 07:09:12.012757   32267 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:12.013217   32267 main.go:141] libmachine: Using API Version  1
	I0915 07:09:12.013237   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:12.013551   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:12.013767   32267 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:09:12.013946   32267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:12.013966   32267 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:09:12.016820   32267 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:12.017179   32267 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:09:12.017209   32267 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:12.017323   32267 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:09:12.017493   32267 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:09:12.017620   32267 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:09:12.017719   32267 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:09:12.096606   32267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:12.111204   32267 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
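
Note on the apiserver checks: each run verifies the control-plane VIP by requesting https://192.168.39.254:8443/healthz and receiving "200: ok", which is why ha-670527 and ha-670527-m03 keep reporting apiserver: Running even while m02 is down. The sketch below probes the same endpoint; skipping TLS verification is an assumption made only to keep the sketch self-contained, not how minikube itself handles the apiserver certificate:

	// healthz_probe.go - illustrative sketch of the healthz check shown in the
	// log against https://192.168.39.254:8443/healthz.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log shows this endpoint returning 200 with body "ok".
		fmt.Println(resp.Status, string(body))
	}
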
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 7 (609.334353ms)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670527-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:09:26.438521   32386 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:09:26.438635   32386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:26.438645   32386 out.go:358] Setting ErrFile to fd 2...
	I0915 07:09:26.438651   32386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:26.438814   32386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:09:26.438977   32386 out.go:352] Setting JSON to false
	I0915 07:09:26.439012   32386 mustload.go:65] Loading cluster: ha-670527
	I0915 07:09:26.439105   32386 notify.go:220] Checking for updates...
	I0915 07:09:26.439460   32386 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:09:26.439477   32386 status.go:255] checking status of ha-670527 ...
	I0915 07:09:26.439898   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.439967   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.459150   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39659
	I0915 07:09:26.459646   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.460173   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.460214   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.460549   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.460765   32386 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:09:26.462442   32386 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:09:26.462459   32386 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:09:26.462740   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.462781   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.476850   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0915 07:09:26.477283   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.477709   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.477729   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.478021   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.478162   32386 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:09:26.481027   32386 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:26.481446   32386 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:09:26.481465   32386 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:26.481605   32386 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:09:26.481986   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.482032   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.496303   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0915 07:09:26.496752   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.497200   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.497225   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.497554   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.497704   32386 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:09:26.497889   32386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:26.497920   32386 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:09:26.500816   32386 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:26.501395   32386 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:09:26.501429   32386 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:09:26.501592   32386 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:09:26.501797   32386 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:09:26.501954   32386 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:09:26.502065   32386 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:09:26.594450   32386 ssh_runner.go:195] Run: systemctl --version
	I0915 07:09:26.600364   32386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:26.616840   32386 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:09:26.616869   32386 api_server.go:166] Checking apiserver status ...
	I0915 07:09:26.616895   32386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:09:26.630231   32386 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0915 07:09:26.639338   32386 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:09:26.639383   32386 ssh_runner.go:195] Run: ls
	I0915 07:09:26.643906   32386 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:09:26.648890   32386 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:09:26.648912   32386 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:09:26.648924   32386 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:26.648956   32386 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:09:26.649236   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.649276   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.663599   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0915 07:09:26.664015   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.664507   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.664529   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.664818   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.665002   32386 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:09:26.666428   32386 status.go:330] ha-670527-m02 host status = "Stopped" (err=<nil>)
	I0915 07:09:26.666445   32386 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:26.666451   32386 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:26.666466   32386 status.go:255] checking status of ha-670527-m03 ...
	I0915 07:09:26.666735   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.666768   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.682052   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0915 07:09:26.682602   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.683096   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.683112   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.683411   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.683570   32386 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:09:26.684894   32386 status.go:330] ha-670527-m03 host status = "Running" (err=<nil>)
	I0915 07:09:26.684910   32386 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:09:26.685238   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.685281   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.699577   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0915 07:09:26.699926   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.700585   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.700606   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.700901   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.701146   32386 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:09:26.704052   32386 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:26.704516   32386 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:09:26.704547   32386 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:26.704788   32386 host.go:66] Checking if "ha-670527-m03" exists ...
	I0915 07:09:26.705205   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.705246   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.719892   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32809
	I0915 07:09:26.720283   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.720739   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.720762   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.721080   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.721285   32386 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:09:26.721458   32386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:26.721475   32386 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:09:26.724046   32386 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:26.724405   32386 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:09:26.724427   32386 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:26.724546   32386 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:09:26.724724   32386 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:09:26.724876   32386 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:09:26.724993   32386 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:09:26.805290   32386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:26.821146   32386 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:09:26.821169   32386 api_server.go:166] Checking apiserver status ...
	I0915 07:09:26.821209   32386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:09:26.834786   32386 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0915 07:09:26.844220   32386 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:09:26.844315   32386 ssh_runner.go:195] Run: ls
	I0915 07:09:26.848922   32386 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:09:26.853080   32386 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:09:26.853097   32386 status.go:422] ha-670527-m03 apiserver status = Running (err=<nil>)
	I0915 07:09:26.853104   32386 status.go:257] ha-670527-m03 status: &{Name:ha-670527-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:26.853120   32386 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:09:26.853442   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.853488   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.868389   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0915 07:09:26.868798   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.869333   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.869367   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.869643   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.869862   32386 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:09:26.871470   32386 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:09:26.871486   32386 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:09:26.871758   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.871798   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.886279   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I0915 07:09:26.886685   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.887144   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.887166   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.887463   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.887636   32386 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:09:26.890220   32386 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:26.890635   32386 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:09:26.890661   32386 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:26.890796   32386 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:09:26.891151   32386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:26.891194   32386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:26.905294   32386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I0915 07:09:26.905760   32386 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:26.906199   32386 main.go:141] libmachine: Using API Version  1
	I0915 07:09:26.906228   32386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:26.906607   32386 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:26.906767   32386 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:09:26.906942   32386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:26.906959   32386 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:09:26.909651   32386 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:26.910072   32386 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:09:26.910091   32386 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:26.910230   32386 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:09:26.910440   32386 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:09:26.910573   32386 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:09:26.910709   32386 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:09:26.992828   32386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:27.006231   32386 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
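In the trace above, the apiserver field is decided in two steps: api_server.go first looks for the kube-apiserver process and its freezer cgroup (that lookup fails here, hence the W-level warnings), and then probes https://192.168.39.254:8443/healthz, treating a 200 response as Running. A minimal sketch of that final probe is below; skipping TLS verification is an assumption made only for this sketch, and the VIP and port are the ones reported in the log.

	// Sketch only: probe the HA virtual IP from the log and print the result;
	// minikube's own check additionally located the apiserver PID first.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify is an assumption for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver: Stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode) // 200 maps to "Running"
	}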
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr" : exit status 7
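For local triage, the same status text shown in the stdout block above can be re-collected and scanned for nodes that are not fully up. The command exits non-zero while any node is down (exit status 7 here, with ha-670527-m02 reported Stopped), so its output has to be read even when the error is non-nil. This is a hypothetical helper, not part of the test suite; the binary path and profile name are taken from this run.

	// Hypothetical triage helper: parse the plain-text status output and list
	// any node/component pair that is not "Running".
	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Keep the combined output even on a non-zero exit.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-670527", "status").CombinedOutput()

		node := ""
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ha-670527"):
				node = line // section header, e.g. "ha-670527-m02"
			case strings.HasPrefix(line, "host:"), strings.HasPrefix(line, "kubelet:"), strings.HasPrefix(line, "apiserver:"):
				if !strings.HasSuffix(line, "Running") {
					fmt.Printf("%s -> %s\n", node, line)
				}
			}
		}
		if err != nil {
			fmt.Println("status exit:", err)
		}
	}

minikube status also offers a JSON format (--output json) if structured parsing is preferred over scanning the text layout shown above.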
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-670527 -n ha-670527
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-670527 logs -n 25: (1.478213533s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527:/home/docker/cp-test_ha-670527-m03_ha-670527.txt                       |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527 sudo cat                                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527.txt                                 |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m02:/home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m04:/home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m04 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp testdata/cp-test.txt                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527:/home/docker/cp-test_ha-670527-m04_ha-670527.txt                       |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527 sudo cat                                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527.txt                                 |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m02:/home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m03:/home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m03 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-670527 node stop m02 -v=7                                                     | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-670527 node start m02 -v=7                                                    | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:01:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:01:22.338266   26835 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:01:22.338515   26835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:22.338525   26835 out.go:358] Setting ErrFile to fd 2...
	I0915 07:01:22.338532   26835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:22.338738   26835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:01:22.339316   26835 out.go:352] Setting JSON to false
	I0915 07:01:22.340214   26835 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2628,"bootTime":1726381054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:01:22.340315   26835 start.go:139] virtualization: kvm guest
	I0915 07:01:22.342433   26835 out.go:177] * [ha-670527] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:01:22.343626   26835 notify.go:220] Checking for updates...
	I0915 07:01:22.343686   26835 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:01:22.344812   26835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:01:22.346115   26835 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:01:22.347411   26835 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:01:22.348750   26835 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:01:22.349955   26835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:01:22.351099   26835 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:01:22.384814   26835 out.go:177] * Using the kvm2 driver based on user configuration
	I0915 07:01:22.386050   26835 start.go:297] selected driver: kvm2
	I0915 07:01:22.386063   26835 start.go:901] validating driver "kvm2" against <nil>
	I0915 07:01:22.386074   26835 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:01:22.386776   26835 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:01:22.386846   26835 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:01:22.401115   26835 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:01:22.401164   26835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 07:01:22.401477   26835 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:01:22.401519   26835 cni.go:84] Creating CNI manager for ""
	I0915 07:01:22.401575   26835 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0915 07:01:22.401585   26835 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 07:01:22.401663   26835 start.go:340] cluster config:
	{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0915 07:01:22.401928   26835 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:01:22.404211   26835 out.go:177] * Starting "ha-670527" primary control-plane node in "ha-670527" cluster
	I0915 07:01:22.405703   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:01:22.405735   26835 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:01:22.405743   26835 cache.go:56] Caching tarball of preloaded images
	I0915 07:01:22.405833   26835 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:01:22.405846   26835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:01:22.406152   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:01:22.406173   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json: {Name:mkf802eeadbffbfc049e41868d31a8e27df1da7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:22.406318   26835 start.go:360] acquireMachinesLock for ha-670527: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:01:22.406357   26835 start.go:364] duration metric: took 18.446µs to acquireMachinesLock for "ha-670527"
	I0915 07:01:22.406374   26835 start.go:93] Provisioning new machine with config: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:01:22.406438   26835 start.go:125] createHost starting for "" (driver="kvm2")
	I0915 07:01:22.408103   26835 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 07:01:22.408239   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:01:22.408284   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:01:22.422470   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0915 07:01:22.422913   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:01:22.423449   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:01:22.423469   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:01:22.423853   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:01:22.424040   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:22.424172   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:22.424333   26835 start.go:159] libmachine.API.Create for "ha-670527" (driver="kvm2")
	I0915 07:01:22.424365   26835 client.go:168] LocalClient.Create starting
	I0915 07:01:22.424409   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 07:01:22.424444   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:01:22.424460   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:01:22.424514   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 07:01:22.424531   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:01:22.424544   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:01:22.424556   26835 main.go:141] libmachine: Running pre-create checks...
	I0915 07:01:22.424568   26835 main.go:141] libmachine: (ha-670527) Calling .PreCreateCheck
	I0915 07:01:22.424948   26835 main.go:141] libmachine: (ha-670527) Calling .GetConfigRaw
	I0915 07:01:22.425340   26835 main.go:141] libmachine: Creating machine...
	I0915 07:01:22.425353   26835 main.go:141] libmachine: (ha-670527) Calling .Create
	I0915 07:01:22.425518   26835 main.go:141] libmachine: (ha-670527) Creating KVM machine...
	I0915 07:01:22.426896   26835 main.go:141] libmachine: (ha-670527) DBG | found existing default KVM network
	I0915 07:01:22.427571   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.427424   26858 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0915 07:01:22.427618   26835 main.go:141] libmachine: (ha-670527) DBG | created network xml: 
	I0915 07:01:22.427640   26835 main.go:141] libmachine: (ha-670527) DBG | <network>
	I0915 07:01:22.427654   26835 main.go:141] libmachine: (ha-670527) DBG |   <name>mk-ha-670527</name>
	I0915 07:01:22.427664   26835 main.go:141] libmachine: (ha-670527) DBG |   <dns enable='no'/>
	I0915 07:01:22.427687   26835 main.go:141] libmachine: (ha-670527) DBG |   
	I0915 07:01:22.427700   26835 main.go:141] libmachine: (ha-670527) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0915 07:01:22.427707   26835 main.go:141] libmachine: (ha-670527) DBG |     <dhcp>
	I0915 07:01:22.427718   26835 main.go:141] libmachine: (ha-670527) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0915 07:01:22.427726   26835 main.go:141] libmachine: (ha-670527) DBG |     </dhcp>
	I0915 07:01:22.427739   26835 main.go:141] libmachine: (ha-670527) DBG |   </ip>
	I0915 07:01:22.427747   26835 main.go:141] libmachine: (ha-670527) DBG |   
	I0915 07:01:22.427752   26835 main.go:141] libmachine: (ha-670527) DBG | </network>
	I0915 07:01:22.427764   26835 main.go:141] libmachine: (ha-670527) DBG | 
	I0915 07:01:22.432551   26835 main.go:141] libmachine: (ha-670527) DBG | trying to create private KVM network mk-ha-670527 192.168.39.0/24...
	I0915 07:01:22.495364   26835 main.go:141] libmachine: (ha-670527) DBG | private KVM network mk-ha-670527 192.168.39.0/24 created
	I0915 07:01:22.495398   26835 main.go:141] libmachine: (ha-670527) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527 ...
	I0915 07:01:22.495423   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.495344   26858 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:01:22.495440   26835 main.go:141] libmachine: (ha-670527) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 07:01:22.495519   26835 main.go:141] libmachine: (ha-670527) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 07:01:22.742568   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.742430   26858 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa...
	I0915 07:01:22.978699   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.978563   26858 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/ha-670527.rawdisk...
	I0915 07:01:22.978729   26835 main.go:141] libmachine: (ha-670527) DBG | Writing magic tar header
	I0915 07:01:22.978738   26835 main.go:141] libmachine: (ha-670527) DBG | Writing SSH key tar header
	I0915 07:01:22.978745   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:22.978695   26858 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527 ...
	I0915 07:01:22.978895   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527 (perms=drwx------)
	I0915 07:01:22.978922   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527
	I0915 07:01:22.978933   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 07:01:22.978949   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 07:01:22.978975   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 07:01:22.978987   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 07:01:22.978994   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 07:01:22.979002   26835 main.go:141] libmachine: (ha-670527) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 07:01:22.979011   26835 main.go:141] libmachine: (ha-670527) Creating domain...
	I0915 07:01:22.979025   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:01:22.979041   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 07:01:22.979057   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 07:01:22.979068   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home/jenkins
	I0915 07:01:22.979079   26835 main.go:141] libmachine: (ha-670527) DBG | Checking permissions on dir: /home
	I0915 07:01:22.979090   26835 main.go:141] libmachine: (ha-670527) DBG | Skipping /home - not owner
	I0915 07:01:22.980081   26835 main.go:141] libmachine: (ha-670527) define libvirt domain using xml: 
	I0915 07:01:22.980126   26835 main.go:141] libmachine: (ha-670527) <domain type='kvm'>
	I0915 07:01:22.980136   26835 main.go:141] libmachine: (ha-670527)   <name>ha-670527</name>
	I0915 07:01:22.980143   26835 main.go:141] libmachine: (ha-670527)   <memory unit='MiB'>2200</memory>
	I0915 07:01:22.980148   26835 main.go:141] libmachine: (ha-670527)   <vcpu>2</vcpu>
	I0915 07:01:22.980154   26835 main.go:141] libmachine: (ha-670527)   <features>
	I0915 07:01:22.980159   26835 main.go:141] libmachine: (ha-670527)     <acpi/>
	I0915 07:01:22.980166   26835 main.go:141] libmachine: (ha-670527)     <apic/>
	I0915 07:01:22.980171   26835 main.go:141] libmachine: (ha-670527)     <pae/>
	I0915 07:01:22.980180   26835 main.go:141] libmachine: (ha-670527)     
	I0915 07:01:22.980186   26835 main.go:141] libmachine: (ha-670527)   </features>
	I0915 07:01:22.980191   26835 main.go:141] libmachine: (ha-670527)   <cpu mode='host-passthrough'>
	I0915 07:01:22.980198   26835 main.go:141] libmachine: (ha-670527)   
	I0915 07:01:22.980202   26835 main.go:141] libmachine: (ha-670527)   </cpu>
	I0915 07:01:22.980206   26835 main.go:141] libmachine: (ha-670527)   <os>
	I0915 07:01:22.980210   26835 main.go:141] libmachine: (ha-670527)     <type>hvm</type>
	I0915 07:01:22.980214   26835 main.go:141] libmachine: (ha-670527)     <boot dev='cdrom'/>
	I0915 07:01:22.980220   26835 main.go:141] libmachine: (ha-670527)     <boot dev='hd'/>
	I0915 07:01:22.980224   26835 main.go:141] libmachine: (ha-670527)     <bootmenu enable='no'/>
	I0915 07:01:22.980230   26835 main.go:141] libmachine: (ha-670527)   </os>
	I0915 07:01:22.980234   26835 main.go:141] libmachine: (ha-670527)   <devices>
	I0915 07:01:22.980240   26835 main.go:141] libmachine: (ha-670527)     <disk type='file' device='cdrom'>
	I0915 07:01:22.980266   26835 main.go:141] libmachine: (ha-670527)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/boot2docker.iso'/>
	I0915 07:01:22.980293   26835 main.go:141] libmachine: (ha-670527)       <target dev='hdc' bus='scsi'/>
	I0915 07:01:22.980315   26835 main.go:141] libmachine: (ha-670527)       <readonly/>
	I0915 07:01:22.980335   26835 main.go:141] libmachine: (ha-670527)     </disk>
	I0915 07:01:22.980359   26835 main.go:141] libmachine: (ha-670527)     <disk type='file' device='disk'>
	I0915 07:01:22.980379   26835 main.go:141] libmachine: (ha-670527)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 07:01:22.980394   26835 main.go:141] libmachine: (ha-670527)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/ha-670527.rawdisk'/>
	I0915 07:01:22.980418   26835 main.go:141] libmachine: (ha-670527)       <target dev='hda' bus='virtio'/>
	I0915 07:01:22.980430   26835 main.go:141] libmachine: (ha-670527)     </disk>
	I0915 07:01:22.980439   26835 main.go:141] libmachine: (ha-670527)     <interface type='network'>
	I0915 07:01:22.980451   26835 main.go:141] libmachine: (ha-670527)       <source network='mk-ha-670527'/>
	I0915 07:01:22.980460   26835 main.go:141] libmachine: (ha-670527)       <model type='virtio'/>
	I0915 07:01:22.980468   26835 main.go:141] libmachine: (ha-670527)     </interface>
	I0915 07:01:22.980477   26835 main.go:141] libmachine: (ha-670527)     <interface type='network'>
	I0915 07:01:22.980484   26835 main.go:141] libmachine: (ha-670527)       <source network='default'/>
	I0915 07:01:22.980516   26835 main.go:141] libmachine: (ha-670527)       <model type='virtio'/>
	I0915 07:01:22.980533   26835 main.go:141] libmachine: (ha-670527)     </interface>
	I0915 07:01:22.980545   26835 main.go:141] libmachine: (ha-670527)     <serial type='pty'>
	I0915 07:01:22.980554   26835 main.go:141] libmachine: (ha-670527)       <target port='0'/>
	I0915 07:01:22.980562   26835 main.go:141] libmachine: (ha-670527)     </serial>
	I0915 07:01:22.980575   26835 main.go:141] libmachine: (ha-670527)     <console type='pty'>
	I0915 07:01:22.980590   26835 main.go:141] libmachine: (ha-670527)       <target type='serial' port='0'/>
	I0915 07:01:22.980603   26835 main.go:141] libmachine: (ha-670527)     </console>
	I0915 07:01:22.980615   26835 main.go:141] libmachine: (ha-670527)     <rng model='virtio'>
	I0915 07:01:22.980626   26835 main.go:141] libmachine: (ha-670527)       <backend model='random'>/dev/random</backend>
	I0915 07:01:22.980636   26835 main.go:141] libmachine: (ha-670527)     </rng>
	I0915 07:01:22.980641   26835 main.go:141] libmachine: (ha-670527)     
	I0915 07:01:22.980653   26835 main.go:141] libmachine: (ha-670527)     
	I0915 07:01:22.980668   26835 main.go:141] libmachine: (ha-670527)   </devices>
	I0915 07:01:22.980677   26835 main.go:141] libmachine: (ha-670527) </domain>
	I0915 07:01:22.980681   26835 main.go:141] libmachine: (ha-670527) 
	I0915 07:01:22.984907   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:0b:b1:eb in network default
	I0915 07:01:22.985523   26835 main.go:141] libmachine: (ha-670527) Ensuring networks are active...
	I0915 07:01:22.985551   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:22.986143   26835 main.go:141] libmachine: (ha-670527) Ensuring network default is active
	I0915 07:01:22.986388   26835 main.go:141] libmachine: (ha-670527) Ensuring network mk-ha-670527 is active
	I0915 07:01:22.986851   26835 main.go:141] libmachine: (ha-670527) Getting domain xml...
	I0915 07:01:22.987441   26835 main.go:141] libmachine: (ha-670527) Creating domain...
	I0915 07:01:24.166128   26835 main.go:141] libmachine: (ha-670527) Waiting to get IP...
	I0915 07:01:24.166896   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.167250   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.167284   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.167233   26858 retry.go:31] will retry after 188.706653ms: waiting for machine to come up
	I0915 07:01:24.357578   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.358062   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.358210   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.358012   26858 retry.go:31] will retry after 260.220734ms: waiting for machine to come up
	I0915 07:01:24.619321   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.619779   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.619799   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.619736   26858 retry.go:31] will retry after 363.224901ms: waiting for machine to come up
	I0915 07:01:24.984128   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:24.984569   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:24.984613   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:24.984532   26858 retry.go:31] will retry after 535.952621ms: waiting for machine to come up
	I0915 07:01:25.522277   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:25.522767   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:25.522795   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:25.522719   26858 retry.go:31] will retry after 645.876747ms: waiting for machine to come up
	I0915 07:01:26.170487   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:26.170857   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:26.170900   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:26.170818   26858 retry.go:31] will retry after 846.64448ms: waiting for machine to come up
	I0915 07:01:27.018803   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:27.019226   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:27.019268   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:27.019128   26858 retry.go:31] will retry after 1.180309168s: waiting for machine to come up
	I0915 07:01:28.200567   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:28.201022   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:28.201053   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:28.200967   26858 retry.go:31] will retry after 988.422962ms: waiting for machine to come up
	I0915 07:01:29.191077   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:29.191473   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:29.191495   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:29.191434   26858 retry.go:31] will retry after 1.502324093s: waiting for machine to come up
	I0915 07:01:30.696077   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:30.696438   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:30.696459   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:30.696405   26858 retry.go:31] will retry after 1.467846046s: waiting for machine to come up
	I0915 07:01:32.166170   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:32.166717   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:32.166748   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:32.166644   26858 retry.go:31] will retry after 1.903254759s: waiting for machine to come up
	I0915 07:01:34.071613   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:34.072111   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:34.072132   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:34.072065   26858 retry.go:31] will retry after 2.570486979s: waiting for machine to come up
	I0915 07:01:36.645795   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:36.646237   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:36.646252   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:36.646204   26858 retry.go:31] will retry after 3.887633246s: waiting for machine to come up
	I0915 07:01:40.537825   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:40.538226   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find current IP address of domain ha-670527 in network mk-ha-670527
	I0915 07:01:40.538256   26835 main.go:141] libmachine: (ha-670527) DBG | I0915 07:01:40.538205   26858 retry.go:31] will retry after 4.090180911s: waiting for machine to come up
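
The retries above come from libmachine polling libvirt for a DHCP lease, sleeping a little longer each time until the guest reports an address. A minimal Go sketch of that wait-with-growing-backoff pattern (hypothetical waitForIP helper, jitter and limits chosen for illustration, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	backoff := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		// add jitter so concurrent waiters do not poll libvirt in lockstep
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff = backoff * 3 / 2
		}
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	// simulated lookup: the first few attempts fail, then an address appears
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.54", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
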
	I0915 07:01:44.630705   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.631138   26835 main.go:141] libmachine: (ha-670527) Found IP for machine: 192.168.39.54
	I0915 07:01:44.631163   26835 main.go:141] libmachine: (ha-670527) Reserving static IP address...
	I0915 07:01:44.631176   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has current primary IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.631529   26835 main.go:141] libmachine: (ha-670527) DBG | unable to find host DHCP lease matching {name: "ha-670527", mac: "52:54:00:c3:49:88", ip: "192.168.39.54"} in network mk-ha-670527
	I0915 07:01:44.700421   26835 main.go:141] libmachine: (ha-670527) DBG | Getting to WaitForSSH function...
	I0915 07:01:44.700452   26835 main.go:141] libmachine: (ha-670527) Reserved static IP address: 192.168.39.54
	I0915 07:01:44.700463   26835 main.go:141] libmachine: (ha-670527) Waiting for SSH to be available...
	I0915 07:01:44.702979   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.703360   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:44.703397   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.703558   26835 main.go:141] libmachine: (ha-670527) DBG | Using SSH client type: external
	I0915 07:01:44.703586   26835 main.go:141] libmachine: (ha-670527) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa (-rw-------)
	I0915 07:01:44.703613   26835 main.go:141] libmachine: (ha-670527) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:01:44.703624   26835 main.go:141] libmachine: (ha-670527) DBG | About to run SSH command:
	I0915 07:01:44.703638   26835 main.go:141] libmachine: (ha-670527) DBG | exit 0
	I0915 07:01:44.829685   26835 main.go:141] libmachine: (ha-670527) DBG | SSH cmd err, output: <nil>: 
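
The "exit 0" command above is the readiness probe: once an external ssh invocation with the machine's generated key succeeds, provisioning can continue. A hedged sketch of an equivalent check, reusing the client options visible in the log (the address and key path below are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs "exit 0" on the guest and reports whether sshd accepted the key.
func sshReady(addr, keyPath string) error {
	// mirror the options in the log: no known_hosts churn, key auth only
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready yet: %v (%s)", err, out)
	}
	return nil
}

func main() {
	// placeholder values; the real path comes from the machine store
	if err := sshReady("192.168.39.54", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("SSH is available")
	}
}
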
	I0915 07:01:44.829964   26835 main.go:141] libmachine: (ha-670527) KVM machine creation complete!
	I0915 07:01:44.830281   26835 main.go:141] libmachine: (ha-670527) Calling .GetConfigRaw
	I0915 07:01:44.830895   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:44.831117   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:44.831314   26835 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 07:01:44.831329   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:01:44.832678   26835 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 07:01:44.832699   26835 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 07:01:44.832708   26835 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 07:01:44.832718   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:44.835692   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.836060   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:44.836098   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.836208   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:44.836378   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.836526   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.836642   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:44.836772   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:44.836986   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:44.836997   26835 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 07:01:44.945058   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:01:44.945078   26835 main.go:141] libmachine: Detecting the provisioner...
	I0915 07:01:44.945085   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:44.947793   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.948119   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:44.948141   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:44.948290   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:44.948484   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.948613   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:44.948721   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:44.948831   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:44.948990   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:44.949001   26835 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 07:01:45.058744   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 07:01:45.058841   26835 main.go:141] libmachine: found compatible host: buildroot
	I0915 07:01:45.058850   26835 main.go:141] libmachine: Provisioning with buildroot...
	I0915 07:01:45.058857   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:45.059094   26835 buildroot.go:166] provisioning hostname "ha-670527"
	I0915 07:01:45.059126   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:45.059297   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.061876   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.062229   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.062258   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.062348   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.062511   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.062614   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.062786   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.062927   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:45.063089   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:45.063100   26835 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527 && echo "ha-670527" | sudo tee /etc/hostname
	I0915 07:01:45.183848   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527
	
	I0915 07:01:45.183886   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.186544   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.186873   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.186896   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.187091   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.187253   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.187406   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.187536   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.187697   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:45.187915   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:45.187935   26835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:01:45.302480   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
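
The hostname step runs two shell fragments over SSH: one sets /etc/hostname, the other makes sure /etc/hosts maps 127.0.1.1 to the new name. A small sketch of how such a command string could be assembled (hypothetical hostnameCmd helper, not the buildroot provisioner itself):

package main

import "fmt"

// hostnameCmd returns a shell command that sets the hostname and patches
// /etc/hosts, mirroring the fragments shown in the log above.
func hostnameCmd(name string) string {
	return fmt.Sprintf(
		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
			`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
			`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
			`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
			`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-670527"))
}
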
	I0915 07:01:45.302530   26835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:01:45.302576   26835 buildroot.go:174] setting up certificates
	I0915 07:01:45.302592   26835 provision.go:84] configureAuth start
	I0915 07:01:45.302605   26835 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:01:45.302895   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:45.305295   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.305594   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.305617   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.305748   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.307612   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.307902   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.307932   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.308072   26835 provision.go:143] copyHostCerts
	I0915 07:01:45.308104   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:01:45.308140   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:01:45.308148   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:01:45.308215   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:01:45.308307   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:01:45.308325   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:01:45.308332   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:01:45.308379   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:01:45.308484   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:01:45.308508   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:01:45.308517   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:01:45.308552   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:01:45.308627   26835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527 san=[127.0.0.1 192.168.39.54 ha-670527 localhost minikube]
	I0915 07:01:45.491639   26835 provision.go:177] copyRemoteCerts
	I0915 07:01:45.491698   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:01:45.491720   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.494361   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.494658   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.494685   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.494797   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.495000   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.495146   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.495278   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:45.579874   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:01:45.579950   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:01:45.604185   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:01:45.604260   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0915 07:01:45.628022   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:01:45.628090   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 07:01:45.651817   26835 provision.go:87] duration metric: took 349.209152ms to configureAuth
	I0915 07:01:45.651847   26835 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:01:45.652034   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:01:45.652159   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.655043   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.655378   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.655405   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.655617   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.655762   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.655915   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.656063   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.656217   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:45.656384   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:45.656399   26835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:01:45.887505   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:01:45.887532   26835 main.go:141] libmachine: Checking connection to Docker...
	I0915 07:01:45.887542   26835 main.go:141] libmachine: (ha-670527) Calling .GetURL
	I0915 07:01:45.888872   26835 main.go:141] libmachine: (ha-670527) DBG | Using libvirt version 6000000
	I0915 07:01:45.891428   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.891766   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.891793   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.891941   26835 main.go:141] libmachine: Docker is up and running!
	I0915 07:01:45.891966   26835 main.go:141] libmachine: Reticulating splines...
	I0915 07:01:45.891976   26835 client.go:171] duration metric: took 23.467602141s to LocalClient.Create
	I0915 07:01:45.891999   26835 start.go:167] duration metric: took 23.467666954s to libmachine.API.Create "ha-670527"
	I0915 07:01:45.892007   26835 start.go:293] postStartSetup for "ha-670527" (driver="kvm2")
	I0915 07:01:45.892016   26835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:01:45.892032   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:45.892235   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:01:45.892256   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:45.894291   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.894576   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:45.894599   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:45.894739   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:45.894920   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:45.895026   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:45.895125   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:45.980573   26835 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:01:45.985151   26835 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:01:45.985189   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:01:45.985246   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:01:45.985325   26835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:01:45.985335   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:01:45.985421   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:01:45.995087   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:01:46.019331   26835 start.go:296] duration metric: took 127.309643ms for postStartSetup
	I0915 07:01:46.019392   26835 main.go:141] libmachine: (ha-670527) Calling .GetConfigRaw
	I0915 07:01:46.019946   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:46.022538   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.022832   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.022860   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.023068   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:01:46.023248   26835 start.go:128] duration metric: took 23.616801339s to createHost
	I0915 07:01:46.023275   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:46.025196   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.025484   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.025508   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.025641   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:46.025840   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.025978   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.026139   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:46.026267   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:01:46.026478   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:01:46.026498   26835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:01:46.134541   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726383706.112000470
	
	I0915 07:01:46.134569   26835 fix.go:216] guest clock: 1726383706.112000470
	I0915 07:01:46.134580   26835 fix.go:229] Guest: 2024-09-15 07:01:46.11200047 +0000 UTC Remote: 2024-09-15 07:01:46.023265524 +0000 UTC m=+23.718631124 (delta=88.734946ms)
	I0915 07:01:46.134604   26835 fix.go:200] guest clock delta is within tolerance: 88.734946ms
	I0915 07:01:46.134609   26835 start.go:83] releasing machines lock for "ha-670527", held for 23.728244309s
	I0915 07:01:46.134635   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.134884   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:46.137240   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.137654   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.137678   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.137879   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.138482   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.138646   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:01:46.138754   26835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:01:46.138801   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:46.138868   26835 ssh_runner.go:195] Run: cat /version.json
	I0915 07:01:46.138890   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:01:46.141285   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141474   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141599   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.141626   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141742   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:46.141837   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:46.141864   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:46.141923   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.141984   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:01:46.142063   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:46.142179   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:01:46.142177   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:46.142326   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:01:46.142467   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:01:46.218952   26835 ssh_runner.go:195] Run: systemctl --version
	I0915 07:01:46.246916   26835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:01:46.409291   26835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:01:46.415192   26835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:01:46.415272   26835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:01:46.432003   26835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 07:01:46.432030   26835 start.go:495] detecting cgroup driver to use...
	I0915 07:01:46.432101   26835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:01:46.448723   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:01:46.462830   26835 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:01:46.462893   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:01:46.476505   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:01:46.490557   26835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:01:46.602542   26835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:01:46.739310   26835 docker.go:233] disabling docker service ...
	I0915 07:01:46.739370   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:01:46.753843   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:01:46.766903   26835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:01:46.898044   26835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:01:47.030704   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:01:47.050656   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:01:47.071949   26835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:01:47.072007   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.082479   26835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:01:47.082549   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.092957   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.103025   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.113313   26835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:01:47.123742   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.134057   26835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:01:47.151062   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
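
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.10 as the pause image and cgroupfs as the cgroup manager. A rough Go equivalent of the two main substitutions, operating on the file contents in memory (a sketch under those assumptions, not minikube's crio.go):

package main

import (
	"fmt"
	"regexp"
)

// configureCrio applies the same two line rewrites the sed commands perform.
func configureCrio(conf, pauseImage, cgroupManager string) string {
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(configureCrio(sample, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
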
	I0915 07:01:47.161590   26835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:01:47.170543   26835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 07:01:47.170591   26835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 07:01:47.182597   26835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
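
When the bridge-nf-call-iptables sysctl is missing, as above, the flow falls back to loading br_netfilter and then enables IPv4 forwarding before restarting CRI-O. A minimal sketch of that check-then-fallback pattern (must run as root; error handling simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// the sysctl file only exists once br_netfilter is loaded
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
		return nil
	}
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
	}
	// mirror the log: make sure IPv4 forwarding is enabled as well
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
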
	I0915 07:01:47.192398   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:01:47.324081   26835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:01:47.420878   26835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:01:47.420959   26835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:01:47.425497   26835 start.go:563] Will wait 60s for crictl version
	I0915 07:01:47.425545   26835 ssh_runner.go:195] Run: which crictl
	I0915 07:01:47.429322   26835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:01:47.467220   26835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:01:47.467299   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:01:47.495752   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:01:47.524642   26835 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:01:47.525898   26835 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:01:47.528463   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:47.528841   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:01:47.528868   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:01:47.529092   26835 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:01:47.533285   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:01:47.546197   26835 kubeadm.go:883] updating cluster {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:01:47.546295   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:01:47.546333   26835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:01:47.576923   26835 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0915 07:01:47.576988   26835 ssh_runner.go:195] Run: which lz4
	I0915 07:01:47.580864   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0915 07:01:47.580971   26835 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 07:01:47.585030   26835 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 07:01:47.585056   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0915 07:01:48.943403   26835 crio.go:462] duration metric: took 1.362463597s to copy over tarball
	I0915 07:01:48.943469   26835 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 07:01:50.907239   26835 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.963740546s)
	I0915 07:01:50.907276   26835 crio.go:469] duration metric: took 1.963847523s to extract the tarball
	I0915 07:01:50.907286   26835 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 07:01:50.944640   26835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:01:50.989423   26835 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:01:50.989449   26835 cache_images.go:84] Images are preloaded, skipping loading
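
The preload decision is driven by "sudo crictl images --output json": when the expected kube-apiserver tag is missing the tarball is copied and extracted, after which the same check passes. A sketch of that check, assuming crictl's JSON output is an object with an "images" array whose entries carry "repoTags" (the field names are my assumption from the CRI API, not taken from this log):

package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the given tag appears in crictl's JSON image list.
func hasImage(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err)
}
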
	I0915 07:01:50.989457   26835 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0915 07:01:50.989586   26835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:01:50.989679   26835 ssh_runner.go:195] Run: crio config
	I0915 07:01:51.036536   26835 cni.go:84] Creating CNI manager for ""
	I0915 07:01:51.036560   26835 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0915 07:01:51.036576   26835 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:01:51.036605   26835 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-670527 NodeName:ha-670527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 07:01:51.036776   26835 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-670527"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 07:01:51.036805   26835 kube-vip.go:115] generating kube-vip config ...
	I0915 07:01:51.036850   26835 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:01:51.052905   26835 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:01:51.053025   26835 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0915 07:01:51.053089   26835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:01:51.063199   26835 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:01:51.063273   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0915 07:01:51.072823   26835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0915 07:01:51.088848   26835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:01:51.104303   26835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0915 07:01:51.120544   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0915 07:01:51.136574   26835 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:01:51.140343   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:01:51.152528   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:01:51.265401   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:01:51.281724   26835 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.54
	I0915 07:01:51.281749   26835 certs.go:194] generating shared ca certs ...
	I0915 07:01:51.281769   26835 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.281940   26835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:01:51.281983   26835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:01:51.281995   26835 certs.go:256] generating profile certs ...
	I0915 07:01:51.282050   26835 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:01:51.282070   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt with IP's: []
	I0915 07:01:51.401304   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt ...
	I0915 07:01:51.401332   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt: {Name:mka5690a76d05395db0946261ac3997a291081b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.401517   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key ...
	I0915 07:01:51.401538   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key: {Name:mkd1b6294a065842e208ffc8dee320a135e903bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.401642   26835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9
	I0915 07:01:51.401662   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.254]
	I0915 07:01:51.497958   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9 ...
	I0915 07:01:51.497984   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9: {Name:mkb63f9e00b6807ec3effb048bb09c3cb258c80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.498180   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9 ...
	I0915 07:01:51.498198   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9: {Name:mk1c9961994945d680cbfecfc61b9b26bd523332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.498333   26835 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.d6a173c9 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:01:51.498424   26835 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.d6a173c9 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
	I0915 07:01:51.498479   26835 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:01:51.498495   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt with IP's: []
	I0915 07:01:51.619316   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt ...
	I0915 07:01:51.619354   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt: {Name:mk8e8b1dc5f4806199580985192f13865ad9631a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:01:51.619537   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key ...
	I0915 07:01:51.619550   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key: {Name:mk692b18ee8d7ed5ffa7b264e65e02a13aab4bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
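
The block above is minikube generating the profile's client, apiserver, and aggregator certificates and signing them with the shared CAs. As a rough illustration only (this is not minikube's crypto.go; issueCert and its parameters are made-up names), issuing a CA-signed cert with IP SANs like the ones listed for apiserver.crt could look like this with Go's standard library:

// Illustrative sketch only, not minikube's crypto.go: issue a cert signed by an
// existing CA, with IP SANs like the ones listed for apiserver.crt above.
package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

func issueCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: cn},
		IPAddresses:  ips, // e.g. 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.254
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}
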
	I0915 07:01:51.619647   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:01:51.619668   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:01:51.619679   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:01:51.619694   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:01:51.619707   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:01:51.619720   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:01:51.619732   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:01:51.619745   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:01:51.619799   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:01:51.619841   26835 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:01:51.619851   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:01:51.619871   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:01:51.619914   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:01:51.619946   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:01:51.619988   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:01:51.620021   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.620035   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:51.620045   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.620592   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:01:51.646496   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:01:51.670850   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:01:51.697503   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:01:51.723715   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0915 07:01:51.749604   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:01:51.775182   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:01:51.798076   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:01:51.820746   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:01:51.845308   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:01:51.871272   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:01:51.897269   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:01:51.916589   26835 ssh_runner.go:195] Run: openssl version
	I0915 07:01:51.922558   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:01:51.934005   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.938769   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.938820   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:01:51.944885   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 07:01:51.957963   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:01:51.969571   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.974259   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.974315   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:01:51.979983   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:01:51.991261   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:01:52.003094   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:52.007600   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:52.007659   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:01:52.013212   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
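
The three test/ls/openssl/ln sequences above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of the same hash-and-symlink step, assuming the openssl binary is on PATH (linkBySubjectHash is a hypothetical helper, not minikube code):

// Minimal sketch, assuming openssl is on PATH: compute the subject hash the way
// "openssl x509 -hash -noout -in" does above and create /etc/ssl/certs/<hash>.0.
package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already exists, mirrors the "test -L ... ||" guard
	}
	return os.Symlink(certPath, link)
}
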
	I0915 07:01:52.024389   26835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:01:52.028284   26835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:01:52.028335   26835 kubeadm.go:392] StartCluster: {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:01:52.028395   26835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:01:52.028458   26835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:01:52.068839   26835 cri.go:89] found id: ""
	I0915 07:01:52.068901   26835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 07:01:52.081684   26835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 07:01:52.093797   26835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 07:01:52.109244   26835 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 07:01:52.109265   26835 kubeadm.go:157] found existing configuration files:
	
	I0915 07:01:52.109309   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 07:01:52.119100   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 07:01:52.119162   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 07:01:52.129010   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 07:01:52.138382   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 07:01:52.138443   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 07:01:52.147811   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 07:01:52.156879   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 07:01:52.156922   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 07:01:52.166267   26835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 07:01:52.175241   26835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 07:01:52.175287   26835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 07:01:52.184586   26835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 07:01:52.293898   26835 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 07:01:52.294087   26835 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 07:01:52.391078   26835 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 07:01:52.391223   26835 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 07:01:52.391362   26835 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 07:01:52.401134   26835 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 07:01:52.403461   26835 out.go:235]   - Generating certificates and keys ...
	I0915 07:01:52.404736   26835 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 07:01:52.404828   26835 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 07:01:52.769208   26835 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 07:01:52.890893   26835 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 07:01:53.106013   26835 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 07:01:53.212284   26835 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 07:01:53.427702   26835 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 07:01:53.427959   26835 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-670527 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0915 07:01:53.492094   26835 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 07:01:53.492266   26835 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-670527 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0915 07:01:53.648978   26835 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 07:01:53.712245   26835 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 07:01:53.783010   26835 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 07:01:53.783253   26835 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 07:01:54.269687   26835 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 07:01:54.413559   26835 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 07:01:54.606535   26835 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 07:01:54.768289   26835 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 07:01:54.881907   26835 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 07:01:54.882517   26835 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 07:01:54.885516   26835 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 07:01:54.887699   26835 out.go:235]   - Booting up control plane ...
	I0915 07:01:54.887822   26835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 07:01:54.887927   26835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 07:01:54.888028   26835 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 07:01:54.904806   26835 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 07:01:54.910929   26835 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 07:01:54.910987   26835 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 07:01:55.044359   26835 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 07:01:55.044554   26835 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 07:01:56.048040   26835 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002197719s
	I0915 07:01:56.048186   26835 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 07:02:01.698547   26835 kubeadm.go:310] [api-check] The API server is healthy after 5.653915359s
	I0915 07:02:01.712760   26835 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 07:02:01.726582   26835 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 07:02:01.762909   26835 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 07:02:01.763158   26835 kubeadm.go:310] [mark-control-plane] Marking the node ha-670527 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 07:02:01.775624   26835 kubeadm.go:310] [bootstrap-token] Using token: qqoe14.538zsiy1hqi1fmmp
	I0915 07:02:01.777066   26835 out.go:235]   - Configuring RBAC rules ...
	I0915 07:02:01.777189   26835 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 07:02:01.785613   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 07:02:01.796346   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 07:02:01.799830   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 07:02:01.803486   26835 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 07:02:01.809344   26835 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 07:02:02.106575   26835 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 07:02:02.529953   26835 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 07:02:03.103824   26835 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 07:02:03.103851   26835 kubeadm.go:310] 
	I0915 07:02:03.103902   26835 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 07:02:03.103911   26835 kubeadm.go:310] 
	I0915 07:02:03.104041   26835 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 07:02:03.104065   26835 kubeadm.go:310] 
	I0915 07:02:03.104109   26835 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 07:02:03.104191   26835 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 07:02:03.104258   26835 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 07:02:03.104267   26835 kubeadm.go:310] 
	I0915 07:02:03.104340   26835 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 07:02:03.104350   26835 kubeadm.go:310] 
	I0915 07:02:03.104425   26835 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 07:02:03.104434   26835 kubeadm.go:310] 
	I0915 07:02:03.104501   26835 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 07:02:03.104598   26835 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 07:02:03.104705   26835 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 07:02:03.104720   26835 kubeadm.go:310] 
	I0915 07:02:03.104819   26835 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 07:02:03.104934   26835 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 07:02:03.104948   26835 kubeadm.go:310] 
	I0915 07:02:03.105060   26835 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qqoe14.538zsiy1hqi1fmmp \
	I0915 07:02:03.105198   26835 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b \
	I0915 07:02:03.105239   26835 kubeadm.go:310] 	--control-plane 
	I0915 07:02:03.105253   26835 kubeadm.go:310] 
	I0915 07:02:03.105385   26835 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 07:02:03.105396   26835 kubeadm.go:310] 
	I0915 07:02:03.105511   26835 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qqoe14.538zsiy1hqi1fmmp \
	I0915 07:02:03.105650   26835 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b 
	I0915 07:02:03.106268   26835 kubeadm.go:310] W0915 07:01:52.272705     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 07:02:03.106674   26835 kubeadm.go:310] W0915 07:01:52.276103     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 07:02:03.106810   26835 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 07:02:03.106836   26835 cni.go:84] Creating CNI manager for ""
	I0915 07:02:03.106847   26835 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0915 07:02:03.108556   26835 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 07:02:03.110033   26835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 07:02:03.115651   26835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 07:02:03.115666   26835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 07:02:03.135130   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
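
The CNI manifest is applied by shelling out to the cluster's own kubectl binary with the in-VM kubeconfig, as the ssh_runner command above shows. A trivial sketch of that invocation (applyManifest is an illustrative name, not a minikube function):

// Sketch of the apply step above: sudo <kubectl> apply --kubeconfig=<kubeconfig> -f <manifest>
package sketch

import "os/exec"

func applyManifest(kubectl, kubeconfig, manifest string) ([]byte, error) {
	// e.g. kubectl = /var/lib/minikube/binaries/v1.31.1/kubectl, manifest = /var/tmp/minikube/cni.yaml
	cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	return cmd.CombinedOutput()
}
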
	I0915 07:02:03.516090   26835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 07:02:03.516167   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:03.516174   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-670527 minikube.k8s.io/updated_at=2024_09_15T07_02_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=ha-670527 minikube.k8s.io/primary=true
	I0915 07:02:03.720688   26835 ops.go:34] apiserver oom_adj: -16
	I0915 07:02:03.720852   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:04.220992   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:04.721026   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:05.221544   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:05.721967   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:06.221944   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:06.721918   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 07:02:06.825074   26835 kubeadm.go:1113] duration metric: took 3.30897778s to wait for elevateKubeSystemPrivileges
	I0915 07:02:06.825124   26835 kubeadm.go:394] duration metric: took 14.796790647s to StartCluster
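
The repeated "kubectl get sa default" runs above are minikube polling, every 500ms, for the default ServiceAccount to appear before it grants kube-system privileges. A hedged client-go sketch of the same loop (waitForDefaultSA is a made-up helper, not minikube code):

// Hedged sketch: poll for the "default" ServiceAccount via client-go instead of kubectl.
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			return nil // the token controller has created the SA; RBAC binding can proceed
		}
		if time.Now().After(deadline) {
			return err
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the retries in the log
	}
}
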
	I0915 07:02:06.825151   26835 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:06.825248   26835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:02:06.826001   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:06.826205   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 07:02:06.826222   26835 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 07:02:06.826203   26835 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:06.826275   26835 addons.go:69] Setting storage-provisioner=true in profile "ha-670527"
	I0915 07:02:06.826288   26835 addons.go:234] Setting addon storage-provisioner=true in "ha-670527"
	I0915 07:02:06.826289   26835 addons.go:69] Setting default-storageclass=true in profile "ha-670527"
	I0915 07:02:06.826312   26835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-670527"
	I0915 07:02:06.826319   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:06.826277   26835 start.go:241] waiting for startup goroutines ...
	I0915 07:02:06.826431   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:06.826661   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.826699   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.826761   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.826798   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.841457   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45393
	I0915 07:02:06.841556   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0915 07:02:06.841985   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.842009   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.842500   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.842506   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.842517   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.842520   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.842859   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.842871   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.843077   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:06.843364   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.843395   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.845283   26835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:02:06.845642   26835 kapi.go:59] client config for ha-670527: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key", CAFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
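
The rest.Config dump above is the client minikube builds against the HA virtual IP using the profile's client cert and key. A sketch of constructing an equivalent clientset with client-go (newHAClient is an illustrative name; the host and file paths are taken from the log):

// Sketch of an equivalent client built from the cert/key/CA paths shown above.
package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func newHAClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.39.254:8443", // APIServerHAVIP, not a single node's IP
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key",
			CAFile:   "/home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}
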
	I0915 07:02:06.846317   26835 cert_rotation.go:140] Starting client certificate rotation controller
	I0915 07:02:06.846760   26835 addons.go:234] Setting addon default-storageclass=true in "ha-670527"
	I0915 07:02:06.846802   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:06.847165   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.847203   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.858378   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0915 07:02:06.858780   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.859194   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.859214   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.859571   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.859751   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:06.861452   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:06.861502   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0915 07:02:06.861922   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.862339   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.862361   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.862757   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.863263   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:06.863348   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:06.863393   26835 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 07:02:06.864862   26835 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 07:02:06.864883   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 07:02:06.864900   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:06.867717   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.868106   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:06.868131   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.868252   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:06.868413   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:06.868595   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:06.868701   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:06.878680   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0915 07:02:06.879120   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:06.879562   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:06.879592   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:06.879969   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:06.880136   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:06.881611   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:06.881843   26835 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 07:02:06.881859   26835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 07:02:06.881877   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:06.884389   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.884768   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:06.884793   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:06.884948   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:06.885116   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:06.885279   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:06.885395   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:06.935561   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 07:02:07.037232   26835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 07:02:07.056443   26835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 07:02:07.536670   26835 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
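
minikube injects the host.minikube.internal record by piping the coredns ConfigMap through the sed pipeline shown a few lines above. For comparison only, a client-go sketch of the same idea, splicing the hosts block into the Corefile directly (injectHostRecord is hypothetical, and the "    forward ." anchor assumes the default Corefile indentation):

// Comparison sketch: add a hosts{} entry for host.minikube.internal to the
// CoreDNS Corefile by updating the ConfigMap in place.
package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n    forward .", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts, 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
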
	I0915 07:02:07.917700   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.917729   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.917717   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.917799   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.918090   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.918095   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.918117   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918182   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918192   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.918208   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.918228   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918260   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918273   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.918292   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.918444   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918471   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918501   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.918519   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.918531   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.918607   26835 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 07:02:07.918627   26835 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 07:02:07.918743   26835 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0915 07:02:07.918753   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:07.918765   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:07.918773   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:07.932551   26835 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0915 07:02:07.933454   26835 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0915 07:02:07.933473   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:07.933489   26835 round_trippers.go:473]     Content-Type: application/json
	I0915 07:02:07.933496   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:07.933500   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:07.937740   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:02:07.937929   26835 main.go:141] libmachine: Making call to close driver server
	I0915 07:02:07.937943   26835 main.go:141] libmachine: (ha-670527) Calling .Close
	I0915 07:02:07.938292   26835 main.go:141] libmachine: Successfully made call to close driver server
	I0915 07:02:07.938328   26835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 07:02:07.938329   26835 main.go:141] libmachine: (ha-670527) DBG | Closing plugin on server side
	I0915 07:02:07.940177   26835 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0915 07:02:07.941522   26835 addons.go:510] duration metric: took 1.115301832s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0915 07:02:07.941556   26835 start.go:246] waiting for cluster config update ...
	I0915 07:02:07.941569   26835 start.go:255] writing updated cluster config ...
	I0915 07:02:07.943080   26835 out.go:201] 
	I0915 07:02:07.944459   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:07.944560   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:02:07.946065   26835 out.go:177] * Starting "ha-670527-m02" control-plane node in "ha-670527" cluster
	I0915 07:02:07.947264   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:02:07.947289   26835 cache.go:56] Caching tarball of preloaded images
	I0915 07:02:07.947402   26835 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:02:07.947416   26835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:02:07.947521   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:02:07.947727   26835 start.go:360] acquireMachinesLock for ha-670527-m02: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:02:07.947782   26835 start.go:364] duration metric: took 32.742µs to acquireMachinesLock for "ha-670527-m02"
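
The {Name:... Delay:500ms Timeout:...} specs that appear throughout the log (lock.go WriteFile acquiring, acquireMachinesLock) describe retry-until-timeout locks. minikube uses its own locking package; purely as an illustration of the pattern, a lock-file variant might look like this (acquire is a made-up helper):

// Generic retry-until-timeout lock sketch using an O_EXCL lock file; not minikube's implementation.
package sketch

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // caller releases by removing the lock file
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay) // e.g. Delay:500ms, as in the log's lock specs
	}
}
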
	I0915 07:02:07.947804   26835 start.go:93] Provisioning new machine with config: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:07.947898   26835 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0915 07:02:07.949571   26835 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 07:02:07.949670   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:07.949710   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:07.964465   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41537
	I0915 07:02:07.964840   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:07.965294   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:07.965315   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:07.965702   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:07.965905   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:07.966037   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:07.966246   26835 start.go:159] libmachine.API.Create for "ha-670527" (driver="kvm2")
	I0915 07:02:07.966278   26835 client.go:168] LocalClient.Create starting
	I0915 07:02:07.966313   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 07:02:07.966359   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:02:07.966386   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:02:07.966455   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 07:02:07.966483   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:02:07.966500   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:02:07.966525   26835 main.go:141] libmachine: Running pre-create checks...
	I0915 07:02:07.966537   26835 main.go:141] libmachine: (ha-670527-m02) Calling .PreCreateCheck
	I0915 07:02:07.966712   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetConfigRaw
	I0915 07:02:07.967130   26835 main.go:141] libmachine: Creating machine...
	I0915 07:02:07.967148   26835 main.go:141] libmachine: (ha-670527-m02) Calling .Create
	I0915 07:02:07.967289   26835 main.go:141] libmachine: (ha-670527-m02) Creating KVM machine...
	I0915 07:02:07.968555   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found existing default KVM network
	I0915 07:02:07.968645   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found existing private KVM network mk-ha-670527
	I0915 07:02:07.968783   26835 main.go:141] libmachine: (ha-670527-m02) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02 ...
	I0915 07:02:07.968815   26835 main.go:141] libmachine: (ha-670527-m02) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 07:02:07.968846   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:07.968755   27180 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:02:07.968929   26835 main.go:141] libmachine: (ha-670527-m02) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 07:02:08.201572   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:08.201469   27180 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa...
	I0915 07:02:08.335695   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:08.335566   27180 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/ha-670527-m02.rawdisk...
	I0915 07:02:08.335731   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Writing magic tar header
	I0915 07:02:08.335746   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Writing SSH key tar header
	I0915 07:02:08.335765   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:08.335695   27180 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02 ...
	I0915 07:02:08.335880   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02
	I0915 07:02:08.335936   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 07:02:08.335951   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02 (perms=drwx------)
	I0915 07:02:08.335967   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 07:02:08.335982   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 07:02:08.335995   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:02:08.336008   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 07:02:08.336020   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 07:02:08.336042   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 07:02:08.336054   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home/jenkins
	I0915 07:02:08.336063   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 07:02:08.336076   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Checking permissions on dir: /home
	I0915 07:02:08.336092   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Skipping /home - not owner
	I0915 07:02:08.336103   26835 main.go:141] libmachine: (ha-670527-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 07:02:08.336125   26835 main.go:141] libmachine: (ha-670527-m02) Creating domain...
	I0915 07:02:08.337089   26835 main.go:141] libmachine: (ha-670527-m02) define libvirt domain using xml: 
	I0915 07:02:08.337108   26835 main.go:141] libmachine: (ha-670527-m02) <domain type='kvm'>
	I0915 07:02:08.337118   26835 main.go:141] libmachine: (ha-670527-m02)   <name>ha-670527-m02</name>
	I0915 07:02:08.337131   26835 main.go:141] libmachine: (ha-670527-m02)   <memory unit='MiB'>2200</memory>
	I0915 07:02:08.337143   26835 main.go:141] libmachine: (ha-670527-m02)   <vcpu>2</vcpu>
	I0915 07:02:08.337154   26835 main.go:141] libmachine: (ha-670527-m02)   <features>
	I0915 07:02:08.337163   26835 main.go:141] libmachine: (ha-670527-m02)     <acpi/>
	I0915 07:02:08.337172   26835 main.go:141] libmachine: (ha-670527-m02)     <apic/>
	I0915 07:02:08.337181   26835 main.go:141] libmachine: (ha-670527-m02)     <pae/>
	I0915 07:02:08.337188   26835 main.go:141] libmachine: (ha-670527-m02)     
	I0915 07:02:08.337199   26835 main.go:141] libmachine: (ha-670527-m02)   </features>
	I0915 07:02:08.337217   26835 main.go:141] libmachine: (ha-670527-m02)   <cpu mode='host-passthrough'>
	I0915 07:02:08.337227   26835 main.go:141] libmachine: (ha-670527-m02)   
	I0915 07:02:08.337234   26835 main.go:141] libmachine: (ha-670527-m02)   </cpu>
	I0915 07:02:08.337245   26835 main.go:141] libmachine: (ha-670527-m02)   <os>
	I0915 07:02:08.337256   26835 main.go:141] libmachine: (ha-670527-m02)     <type>hvm</type>
	I0915 07:02:08.337265   26835 main.go:141] libmachine: (ha-670527-m02)     <boot dev='cdrom'/>
	I0915 07:02:08.337275   26835 main.go:141] libmachine: (ha-670527-m02)     <boot dev='hd'/>
	I0915 07:02:08.337286   26835 main.go:141] libmachine: (ha-670527-m02)     <bootmenu enable='no'/>
	I0915 07:02:08.337311   26835 main.go:141] libmachine: (ha-670527-m02)   </os>
	I0915 07:02:08.337327   26835 main.go:141] libmachine: (ha-670527-m02)   <devices>
	I0915 07:02:08.337337   26835 main.go:141] libmachine: (ha-670527-m02)     <disk type='file' device='cdrom'>
	I0915 07:02:08.337348   26835 main.go:141] libmachine: (ha-670527-m02)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/boot2docker.iso'/>
	I0915 07:02:08.337356   26835 main.go:141] libmachine: (ha-670527-m02)       <target dev='hdc' bus='scsi'/>
	I0915 07:02:08.337362   26835 main.go:141] libmachine: (ha-670527-m02)       <readonly/>
	I0915 07:02:08.337369   26835 main.go:141] libmachine: (ha-670527-m02)     </disk>
	I0915 07:02:08.337376   26835 main.go:141] libmachine: (ha-670527-m02)     <disk type='file' device='disk'>
	I0915 07:02:08.337386   26835 main.go:141] libmachine: (ha-670527-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 07:02:08.337396   26835 main.go:141] libmachine: (ha-670527-m02)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/ha-670527-m02.rawdisk'/>
	I0915 07:02:08.337416   26835 main.go:141] libmachine: (ha-670527-m02)       <target dev='hda' bus='virtio'/>
	I0915 07:02:08.337432   26835 main.go:141] libmachine: (ha-670527-m02)     </disk>
	I0915 07:02:08.337442   26835 main.go:141] libmachine: (ha-670527-m02)     <interface type='network'>
	I0915 07:02:08.337453   26835 main.go:141] libmachine: (ha-670527-m02)       <source network='mk-ha-670527'/>
	I0915 07:02:08.337461   26835 main.go:141] libmachine: (ha-670527-m02)       <model type='virtio'/>
	I0915 07:02:08.337468   26835 main.go:141] libmachine: (ha-670527-m02)     </interface>
	I0915 07:02:08.337480   26835 main.go:141] libmachine: (ha-670527-m02)     <interface type='network'>
	I0915 07:02:08.337495   26835 main.go:141] libmachine: (ha-670527-m02)       <source network='default'/>
	I0915 07:02:08.337507   26835 main.go:141] libmachine: (ha-670527-m02)       <model type='virtio'/>
	I0915 07:02:08.337515   26835 main.go:141] libmachine: (ha-670527-m02)     </interface>
	I0915 07:02:08.337524   26835 main.go:141] libmachine: (ha-670527-m02)     <serial type='pty'>
	I0915 07:02:08.337531   26835 main.go:141] libmachine: (ha-670527-m02)       <target port='0'/>
	I0915 07:02:08.337543   26835 main.go:141] libmachine: (ha-670527-m02)     </serial>
	I0915 07:02:08.337551   26835 main.go:141] libmachine: (ha-670527-m02)     <console type='pty'>
	I0915 07:02:08.337560   26835 main.go:141] libmachine: (ha-670527-m02)       <target type='serial' port='0'/>
	I0915 07:02:08.337574   26835 main.go:141] libmachine: (ha-670527-m02)     </console>
	I0915 07:02:08.337585   26835 main.go:141] libmachine: (ha-670527-m02)     <rng model='virtio'>
	I0915 07:02:08.337594   26835 main.go:141] libmachine: (ha-670527-m02)       <backend model='random'>/dev/random</backend>
	I0915 07:02:08.337606   26835 main.go:141] libmachine: (ha-670527-m02)     </rng>
	I0915 07:02:08.337622   26835 main.go:141] libmachine: (ha-670527-m02)     
	I0915 07:02:08.337634   26835 main.go:141] libmachine: (ha-670527-m02)     
	I0915 07:02:08.337641   26835 main.go:141] libmachine: (ha-670527-m02)   </devices>
	I0915 07:02:08.337671   26835 main.go:141] libmachine: (ha-670527-m02) </domain>
	I0915 07:02:08.337702   26835 main.go:141] libmachine: (ha-670527-m02) 
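Note: the driver logs the complete libvirt domain XML it is about to define: 2200 MiB of RAM, 2 vCPUs, host-passthrough CPU, the boot2docker ISO as a SCSI CD-ROM, the raw disk on virtio, and two virtio NICs (the private mk-ha-670527 network plus libvirt's default NAT network). A minimal sketch of rendering such a definition from a Go text/template; the struct, template text and reduced device list are assumptions for illustration, not minikube's actual template, and the rendered XML would then be handed to libvirt's define-domain call as the "Creating domain..." lines show.

package main

import (
	"os"
	"text/template"
)

// domainParams holds the handful of values that vary per machine.
type domainParams struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string // private cluster network, e.g. mk-ha-670527
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	p := domainParams{
		Name:      "ha-670527-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/boot2docker.iso",
		DiskPath:  "/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/ha-670527-m02.rawdisk",
		Network:   "mk-ha-670527",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}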
	I0915 07:02:08.344146   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:35:4a:1c in network default
	I0915 07:02:08.344712   26835 main.go:141] libmachine: (ha-670527-m02) Ensuring networks are active...
	I0915 07:02:08.344730   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:08.345453   26835 main.go:141] libmachine: (ha-670527-m02) Ensuring network default is active
	I0915 07:02:08.345766   26835 main.go:141] libmachine: (ha-670527-m02) Ensuring network mk-ha-670527 is active
	I0915 07:02:08.346254   26835 main.go:141] libmachine: (ha-670527-m02) Getting domain xml...
	I0915 07:02:08.347074   26835 main.go:141] libmachine: (ha-670527-m02) Creating domain...
	I0915 07:02:09.543665   26835 main.go:141] libmachine: (ha-670527-m02) Waiting to get IP...
	I0915 07:02:09.544332   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:09.544734   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:09.544777   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:09.544722   27180 retry.go:31] will retry after 223.468124ms: waiting for machine to come up
	I0915 07:02:09.770366   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:09.770773   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:09.770797   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:09.770732   27180 retry.go:31] will retry after 238.513621ms: waiting for machine to come up
	I0915 07:02:10.011141   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:10.011607   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:10.011630   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:10.011583   27180 retry.go:31] will retry after 331.854292ms: waiting for machine to come up
	I0915 07:02:10.345142   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:10.345563   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:10.345587   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:10.345512   27180 retry.go:31] will retry after 603.907795ms: waiting for machine to come up
	I0915 07:02:10.951205   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:10.951571   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:10.951597   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:10.951535   27180 retry.go:31] will retry after 682.284876ms: waiting for machine to come up
	I0915 07:02:11.635334   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:11.635823   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:11.635847   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:11.635765   27180 retry.go:31] will retry after 624.967872ms: waiting for machine to come up
	I0915 07:02:12.261987   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:12.262355   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:12.262383   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:12.262328   27180 retry.go:31] will retry after 1.134334018s: waiting for machine to come up
	I0915 07:02:13.399207   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:13.399742   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:13.399771   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:13.399729   27180 retry.go:31] will retry after 1.375956263s: waiting for machine to come up
	I0915 07:02:14.777134   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:14.777563   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:14.777579   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:14.777513   27180 retry.go:31] will retry after 1.768180712s: waiting for machine to come up
	I0915 07:02:16.546805   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:16.547182   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:16.547224   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:16.547118   27180 retry.go:31] will retry after 1.716559811s: waiting for machine to come up
	I0915 07:02:18.265525   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:18.265902   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:18.265950   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:18.265878   27180 retry.go:31] will retry after 2.21601359s: waiting for machine to come up
	I0915 07:02:20.483051   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:20.483454   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:20.483506   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:20.483423   27180 retry.go:31] will retry after 3.099487423s: waiting for machine to come up
	I0915 07:02:23.584173   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:23.584557   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find current IP address of domain ha-670527-m02 in network mk-ha-670527
	I0915 07:02:23.584586   26835 main.go:141] libmachine: (ha-670527-m02) DBG | I0915 07:02:23.584508   27180 retry.go:31] will retry after 4.098648524s: waiting for machine to come up
	I0915 07:02:27.684343   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.684832   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has current primary IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.684858   26835 main.go:141] libmachine: (ha-670527-m02) Found IP for machine: 192.168.39.222
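Note: between defining the domain and finding the IP, the driver repeatedly looks up the DHCP lease for MAC 52:54:00:5d:e6:7b in network mk-ha-670527, letting retry.go grow the delay between attempts (223ms, 238ms, 331ms, ... up to 4.1s) until a lease appears. A self-contained sketch of that poll-with-growing-backoff pattern; lookupIP is a stand-in for the real lease lookup and the jitter/cap values are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP stands in for querying the libvirt network's DHCP leases for a
// given MAC address; here it fails most of the time to exercise the loop.
func lookupIP(mac string) (string, error) {
	if rand.Intn(10) != 0 {
		return "", errNoLease
	}
	return "192.168.39.222", nil
}

// waitForIP polls lookupIP with a growing, jittered delay until it succeeds
// or the overall timeout expires.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += time.Duration(rand.Int63n(int64(delay)))
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:5d:e6:7b", time.Minute)
	fmt.Println(ip, err)
}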
	I0915 07:02:27.684883   26835 main.go:141] libmachine: (ha-670527-m02) Reserving static IP address...
	I0915 07:02:27.685296   26835 main.go:141] libmachine: (ha-670527-m02) DBG | unable to find host DHCP lease matching {name: "ha-670527-m02", mac: "52:54:00:5d:e6:7b", ip: "192.168.39.222"} in network mk-ha-670527
	I0915 07:02:27.756355   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Getting to WaitForSSH function...
	I0915 07:02:27.756382   26835 main.go:141] libmachine: (ha-670527-m02) Reserved static IP address: 192.168.39.222
	I0915 07:02:27.756395   26835 main.go:141] libmachine: (ha-670527-m02) Waiting for SSH to be available...
	I0915 07:02:27.758799   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.759203   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:27.759239   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.759264   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Using SSH client type: external
	I0915 07:02:27.759280   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa (-rw-------)
	I0915 07:02:27.759360   26835 main.go:141] libmachine: (ha-670527-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:02:27.759381   26835 main.go:141] libmachine: (ha-670527-m02) DBG | About to run SSH command:
	I0915 07:02:27.759405   26835 main.go:141] libmachine: (ha-670527-m02) DBG | exit 0
	I0915 07:02:27.881993   26835 main.go:141] libmachine: (ha-670527-m02) DBG | SSH cmd err, output: <nil>: 
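Note: with an IP reserved, the driver shells out to the system ssh binary with a hardened option set (no known-hosts persistence, no password auth, IdentitiesOnly, the machine's id_rsa) and simply runs `exit 0` until that returns success. A sketch of the same readiness probe via os/exec; the option list, user, IP and key path are taken from the log above, the polling loop around it is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over the external ssh client; a zero exit status
// means sshd is up and key authentication works.
func sshReady(user, ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa"
	for {
		if err := sshReady("docker", "192.168.39.222", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}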
	I0915 07:02:27.882263   26835 main.go:141] libmachine: (ha-670527-m02) KVM machine creation complete!
	I0915 07:02:27.882572   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetConfigRaw
	I0915 07:02:27.883216   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:27.883392   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:27.883567   26835 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 07:02:27.883580   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:02:27.884843   26835 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 07:02:27.884854   26835 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 07:02:27.884859   26835 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 07:02:27.884864   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:27.887269   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.887620   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:27.887645   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.887817   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:27.887994   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.888138   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.888271   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:27.888459   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:27.888737   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:27.888751   26835 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 07:02:27.985337   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:02:27.985360   26835 main.go:141] libmachine: Detecting the provisioner...
	I0915 07:02:27.985368   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:27.988310   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.988681   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:27.988710   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:27.988881   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:27.989093   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.989253   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:27.989382   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:27.989540   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:27.989706   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:27.989716   26835 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 07:02:28.086637   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 07:02:28.086737   26835 main.go:141] libmachine: found compatible host: buildroot
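Note: provisioner detection is just `cat /etc/os-release` over SSH and matching the ID/NAME fields; here it resolves to Buildroot 2023.02.9, so the buildroot provisioner is selected. A small sketch of parsing that KEY=value output; the field handling follows the os-release format, the rest is illustrative.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=value lines of /etc/os-release into a map,
// stripping surrounding quotes the way PRETTY_NAME is quoted above.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["PRETTY_NAME"])
	}
}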
	I0915 07:02:28.086750   26835 main.go:141] libmachine: Provisioning with buildroot...
	I0915 07:02:28.086757   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:28.086989   26835 buildroot.go:166] provisioning hostname "ha-670527-m02"
	I0915 07:02:28.087009   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:28.087209   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.089734   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.090173   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.090192   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.090340   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.090536   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.090684   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.090836   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.090985   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.091140   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.091151   26835 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527-m02 && echo "ha-670527-m02" | sudo tee /etc/hostname
	I0915 07:02:28.204738   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527-m02
	
	I0915 07:02:28.204784   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.207639   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.207977   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.208016   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.208156   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.208320   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.208469   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.208591   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.208772   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.208959   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.208981   26835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:02:28.314884   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
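Note: hostname provisioning is two SSH commands: set the hostname and tee it into /etc/hostname, then keep /etc/hosts consistent by mapping 127.0.1.1 to the new name (replacing an existing 127.0.1.1 line if present, appending otherwise). A sketch of assembling those command strings in Go; the helper name is hypothetical, the shell bodies mirror the commands visible in the log above.

package main

import "fmt"

// hostnameCommands builds the two shell snippets run over SSH: one to set the
// hostname, one to keep /etc/hosts consistent with it.
func hostnameCommands(hostname string) []string {
	setHostname := fmt.Sprintf(
		"sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
	fixHosts := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	return []string{setHostname, fixHosts}
}

func main() {
	for _, cmd := range hostnameCommands("ha-670527-m02") {
		fmt.Println(cmd)
	}
}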
	I0915 07:02:28.314912   26835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:02:28.314931   26835 buildroot.go:174] setting up certificates
	I0915 07:02:28.314941   26835 provision.go:84] configureAuth start
	I0915 07:02:28.314952   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetMachineName
	I0915 07:02:28.315229   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:28.318150   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.318522   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.318550   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.318741   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.320813   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.321195   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.321222   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.321330   26835 provision.go:143] copyHostCerts
	I0915 07:02:28.321372   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:02:28.321420   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:02:28.321432   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:02:28.321512   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:02:28.321614   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:02:28.321642   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:02:28.321650   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:02:28.321691   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:02:28.321857   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:02:28.321909   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:02:28.321919   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:02:28.321978   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:02:28.322077   26835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527-m02 san=[127.0.0.1 192.168.39.222 ha-670527-m02 localhost minikube]
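Note: configureAuth copies the CA material into the machine store and then mints a per-machine server certificate whose SANs cover 127.0.0.1, the machine IP 192.168.39.222, the hostname, localhost and minikube. A compact sketch of issuing a SAN-bearing server certificate with crypto/x509; it self-signs for brevity instead of signing with the store's ca.pem/ca-key.pem, so treat it as an illustration of the SAN handling only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-670527-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
		DNSNames:    []string{"ha-670527-m02", "localhost", "minikube"},
	}
	// Self-signed here; the real flow signs with the store CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}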
	I0915 07:02:28.421330   26835 provision.go:177] copyRemoteCerts
	I0915 07:02:28.421383   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:02:28.421405   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.424601   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.424944   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.424972   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.425197   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.425370   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.425520   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.425644   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:28.503791   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:02:28.503873   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 07:02:28.527264   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:02:28.527349   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:02:28.551098   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:02:28.551176   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:02:28.574646   26835 provision.go:87] duration metric: took 259.693344ms to configureAuth
	I0915 07:02:28.574675   26835 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:02:28.574894   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:28.574983   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.577824   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.578168   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.578194   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.578371   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.578605   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.578762   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.578892   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.579005   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.579184   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.579208   26835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:02:28.795402   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
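Note: "setting minikube options for container-runtime" boils down, for CRI-O, to writing /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS (here the --insecure-registry flag for the 10.96.0.0/12 service CIDR) and restarting crio. A sketch of composing that remote command; the function is illustrative, the shell body mirrors the command in the log.

package main

import "fmt"

// crioOptionsCommand renders the remote shell command that drops the
// CRIO_MINIKUBE_OPTIONS file and restarts CRI-O.
func crioOptionsCommand(serviceCIDR string) string {
	opts := fmt.Sprintf("--insecure-registry %s ", serviceCIDR)
	return fmt.Sprintf(
		"sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
		opts)
}

func main() {
	fmt.Println(crioOptionsCommand("10.96.0.0/12"))
}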
	I0915 07:02:28.795431   26835 main.go:141] libmachine: Checking connection to Docker...
	I0915 07:02:28.795440   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetURL
	I0915 07:02:28.796731   26835 main.go:141] libmachine: (ha-670527-m02) DBG | Using libvirt version 6000000
	I0915 07:02:28.799441   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.799809   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.799839   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.800000   26835 main.go:141] libmachine: Docker is up and running!
	I0915 07:02:28.800012   26835 main.go:141] libmachine: Reticulating splines...
	I0915 07:02:28.800018   26835 client.go:171] duration metric: took 20.833732537s to LocalClient.Create
	I0915 07:02:28.800039   26835 start.go:167] duration metric: took 20.833793606s to libmachine.API.Create "ha-670527"
	I0915 07:02:28.800051   26835 start.go:293] postStartSetup for "ha-670527-m02" (driver="kvm2")
	I0915 07:02:28.800064   26835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:02:28.800086   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:28.800278   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:02:28.800295   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.802429   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.802753   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.802779   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.802940   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.803104   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.803264   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.803366   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:28.880649   26835 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:02:28.884603   26835 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:02:28.884624   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:02:28.884686   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:02:28.884754   26835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:02:28.884767   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:02:28.884845   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:02:28.894685   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:02:28.918001   26835 start.go:296] duration metric: took 117.936297ms for postStartSetup
	I0915 07:02:28.918048   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetConfigRaw
	I0915 07:02:28.918617   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:28.920944   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.921231   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.921258   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.921446   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:02:28.921629   26835 start.go:128] duration metric: took 20.973719773s to createHost
	I0915 07:02:28.921649   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:28.923851   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.924166   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:28.924185   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:28.924338   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:28.924520   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.924676   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:28.924813   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:28.924953   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:02:28.925114   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0915 07:02:28.925126   26835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:02:29.022483   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726383748.993202938
	
	I0915 07:02:29.022502   26835 fix.go:216] guest clock: 1726383748.993202938
	I0915 07:02:29.022508   26835 fix.go:229] Guest: 2024-09-15 07:02:28.993202938 +0000 UTC Remote: 2024-09-15 07:02:28.921638714 +0000 UTC m=+66.617004315 (delta=71.564224ms)
	I0915 07:02:29.022522   26835 fix.go:200] guest clock delta is within tolerance: 71.564224ms
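Note: the clock check runs `date +%s.%N` on the guest, converts the epoch output to a time, and compares it with the host-side timestamp of the request; here the ~71.56ms delta is inside tolerance, so no clock correction is applied. A small sketch of that comparison; the one-second tolerance is an assumption for the example, not minikube's configured value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" output of
// `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(nsecStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726383748.993202938")
	if err != nil {
		panic(err)
	}
	host := guest.Add(-71564224 * time.Nanosecond) // host-side reference implied by the log's delta
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
}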
	I0915 07:02:29.022527   26835 start.go:83] releasing machines lock for "ha-670527-m02", held for 21.074734352s
	I0915 07:02:29.022542   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.022820   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:29.025216   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.025603   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:29.025630   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.027979   26835 out.go:177] * Found network options:
	I0915 07:02:29.029139   26835 out.go:177]   - NO_PROXY=192.168.39.54
	W0915 07:02:29.030186   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:02:29.030215   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.030670   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.030830   26835 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:02:29.030909   26835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:02:29.030944   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	W0915 07:02:29.031015   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:02:29.031086   26835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:02:29.031108   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:02:29.033444   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.033582   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.033857   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:29.033891   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.033918   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:29.033936   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:29.034071   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:29.034185   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:02:29.034271   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:29.034356   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:02:29.034389   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:29.034517   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:02:29.034520   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:29.034637   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:02:29.274563   26835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:02:29.281548   26835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:02:29.281626   26835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:02:29.298606   26835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 07:02:29.298636   26835 start.go:495] detecting cgroup driver to use...
	I0915 07:02:29.298697   26835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:02:29.316035   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:02:29.331209   26835 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:02:29.331268   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:02:29.346284   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:02:29.360065   26835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:02:29.481409   26835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:02:29.645450   26835 docker.go:233] disabling docker service ...
	I0915 07:02:29.645525   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:02:29.660845   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:02:29.673836   26835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:02:29.793386   26835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:02:29.917775   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:02:29.932542   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:02:29.951401   26835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:02:29.951456   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.961788   26835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:02:29.961858   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.972394   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.982699   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:29.993216   26835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:02:30.004113   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:30.015561   26835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:02:30.033437   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
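Note: the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf is adjusted with a series of sed edits: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", and make sure default_sysctls contains net.ipv4.ip_unprivileged_port_start=0. A rough Go equivalent of those substitutions using regexp on the file contents; treat it as an illustration of the edits, not the code minikube runs.

package main

import (
	"fmt"
	"regexp"
)

// applyCrioEdits performs the same kind of line rewrites the sed commands in
// the log perform, on an in-memory copy of 02-crio.conf.
func applyCrioEdits(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(applyCrioEdits(sample))
}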
	I0915 07:02:30.044452   26835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:02:30.054254   26835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 07:02:30.054304   26835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 07:02:30.067082   26835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
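Note: netfilter preparation is best-effort: when `sysctl net.bridge.bridge-nf-call-iptables` fails because /proc/sys/net/bridge does not exist yet, the br_netfilter module is loaded, and IPv4 forwarding is enabled by writing 1 to /proc/sys/net/ipv4/ip_forward. A sketch of that sequence, assuming it runs as root on the guest.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge sysctl isn't there yet, br_netfilter has not been loaded;
	// load it best-effort, mirroring the modprobe in the log.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
		}
	}
	// Enable IPv4 forwarding, equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}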
	I0915 07:02:30.076775   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:02:30.191355   26835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:02:30.289201   26835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:02:30.289276   26835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:02:30.293893   26835 start.go:563] Will wait 60s for crictl version
	I0915 07:02:30.293943   26835 ssh_runner.go:195] Run: which crictl
	I0915 07:02:30.297544   26835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:02:30.346844   26835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:02:30.346933   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:02:30.380576   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:02:30.411524   26835 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:02:30.413233   26835 out.go:177]   - env NO_PROXY=192.168.39.54
	I0915 07:02:30.414608   26835 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:02:30.417050   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:30.417313   26835 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:02:22 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:02:30.417340   26835 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:02:30.417499   26835 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:02:30.421898   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:02:30.435272   26835 mustload.go:65] Loading cluster: ha-670527
	I0915 07:02:30.435496   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:30.435748   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:30.435784   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:30.450257   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I0915 07:02:30.450737   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:30.451257   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:30.451281   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:30.451570   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:30.451738   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:02:30.453187   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:30.453516   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:30.453553   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:30.467729   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0915 07:02:30.468174   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:30.468573   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:30.468592   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:30.468866   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:30.468993   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:30.469125   26835 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.222
	I0915 07:02:30.469152   26835 certs.go:194] generating shared ca certs ...
	I0915 07:02:30.469164   26835 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:30.469278   26835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:02:30.469314   26835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:02:30.469322   26835 certs.go:256] generating profile certs ...
	I0915 07:02:30.469384   26835 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:02:30.469408   26835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7
	I0915 07:02:30.469422   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.222 192.168.39.254]
	I0915 07:02:30.555578   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7 ...
	I0915 07:02:30.555605   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7: {Name:mk9d3e3970fd43c4cc01395eb4af6ffaf9bbfa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:30.555762   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7 ...
	I0915 07:02:30.555774   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7: {Name:mkdb7ccda7f27e402ed4041657e1289ce0e105a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:02:30.555835   26835 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.5e13d2d7 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:02:30.555958   26835 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.5e13d2d7 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
	I0915 07:02:30.556078   26835 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:02:30.556092   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:02:30.556105   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:02:30.556118   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:02:30.556130   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:02:30.556149   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:02:30.556163   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:02:30.556175   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:02:30.556192   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:02:30.556238   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:02:30.556265   26835 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:02:30.556276   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:02:30.556301   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:02:30.556322   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:02:30.556344   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:02:30.556381   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:02:30.556404   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:30.556418   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:02:30.556430   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
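The cert lines above show the profile's apiserver certificate being regenerated with the full SAN set for the HA cluster: the service IP (10.96.0.1), loopback, the two node IPs and the kube-vip VIP 192.168.39.254. A minimal Go sketch (not part of minikube; the certificate path is taken verbatim from the log and would need adjusting) to confirm which IP SANs actually landed in the generated cert:

// san_check.go - a minimal sketch for inspecting the IP SANs of the
// generated apiserver certificate; the path below is an assumption
// copied from the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // first PEM block holds the leaf certificate
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	for _, ip := range cert.IPAddresses {
		// expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.54,
		// 192.168.39.222 and the VIP 192.168.39.254
		fmt.Println(ip)
	}
}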
	I0915 07:02:30.556459   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:30.559065   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:30.559349   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:30.559367   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:30.559524   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:30.559699   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:30.559800   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:30.559886   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:30.634233   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0915 07:02:30.639638   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0915 07:02:30.651969   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0915 07:02:30.656344   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0915 07:02:30.670370   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0915 07:02:30.674990   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0915 07:02:30.685502   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0915 07:02:30.689789   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0915 07:02:30.701427   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0915 07:02:30.705820   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0915 07:02:30.716115   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0915 07:02:30.720165   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0915 07:02:30.730491   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:02:30.758776   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:02:30.786053   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:02:30.812918   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:02:30.839709   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0915 07:02:30.865241   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:02:30.887692   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:02:30.909831   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:02:30.932076   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:02:30.954043   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:02:30.980964   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:02:31.007544   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0915 07:02:31.025713   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0915 07:02:31.043734   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0915 07:02:31.061392   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0915 07:02:31.079044   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0915 07:02:31.096440   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0915 07:02:31.114730   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0915 07:02:31.133403   26835 ssh_runner.go:195] Run: openssl version
	I0915 07:02:31.139205   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:02:31.150172   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:31.155101   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:31.155163   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:02:31.160723   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:02:31.171690   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:02:31.182811   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:02:31.187381   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:02:31.187428   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:02:31.193069   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 07:02:31.203749   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:02:31.214303   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:02:31.219142   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:02:31.219208   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:02:31.225126   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
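The openssl steps above are the standard subject-hash trick: each extra CA in /usr/share/ca-certificates gets a symlink named "<subject-hash>.0" in /etc/ssl/certs so OpenSSL's lookup can find it. A minimal local Go sketch of the same hash-and-link step (assuming openssl on PATH and write access to /etc/ssl/certs; minikube runs this remotely via ssh_runner, this is only an illustration):

// hashlink.go - sketch of the hash-and-symlink step performed remotely above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// os.Symlink fails if the link already exists; the log's `ln -fs` overwrites instead.
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}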
	I0915 07:02:31.236094   26835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:02:31.240285   26835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:02:31.240331   26835 kubeadm.go:934] updating node {m02 192.168.39.222 8443 v1.31.1 crio true true} ...
	I0915 07:02:31.240423   26835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:02:31.240456   26835 kube-vip.go:115] generating kube-vip config ...
	I0915 07:02:31.240499   26835 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:02:31.257343   26835 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:02:31.257420   26835 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
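The manifest above is the kube-vip static pod minikube drops on each control-plane node: ARP-based leader election (vip_leaderelection/plndr-cp-lock) claims the VIP 192.168.39.254, and with lb_enable/lb_port it also load-balances apiserver traffic on 8443. A minimal Go sketch, not minikube's actual generator, of rendering such a manifest from the two values that vary per cluster in this log (the VIP and the apiserver port), using only text/template:

// kubevip_render.go - illustrative template render of a trimmed kube-vip manifest.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the log: VIP 192.168.39.254, apiserver port 8443.
	if err := tmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443}); err != nil {
		panic(err)
	}
}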
	I0915 07:02:31.257479   26835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:02:31.267300   26835 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0915 07:02:31.267363   26835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0915 07:02:31.276806   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0915 07:02:31.276830   26835 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0915 07:02:31.276844   26835 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0915 07:02:31.276832   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:02:31.276965   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:02:31.281259   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0915 07:02:31.281285   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0915 07:02:33.423127   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:02:33.423198   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:02:33.428293   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0915 07:02:33.428323   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0915 07:02:34.469788   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:02:34.485662   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:02:34.485758   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:02:34.490171   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0915 07:02:34.490205   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
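The download lines above use a "checksum=file:" suffix, i.e. each binary fetched from dl.k8s.io is verified against the published .sha256 sidecar before being cached and copied to the node. A minimal Go sketch of that download-and-verify pattern (assuming network access; it fetches kubectl rather than all three binaries):

// verify_download.go - sketch of fetching a release binary and checking its SHA-256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0] // sidecar file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	fmt.Println("kubectl verified:", want)
}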
	I0915 07:02:34.799607   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0915 07:02:34.809569   26835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0915 07:02:34.827915   26835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:02:34.845023   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:02:34.861258   26835 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:02:34.865438   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
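The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the VIP: strip any existing entry, append the new mapping, then copy the temp file back. A minimal Go sketch of the same rewrite logic (it only prints the result instead of writing /etc/hosts back, and the VIP is the one from this log):

// hosts_entry.go - sketch of the /etc/hosts rewrite performed by the one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any stale control-plane.minikube.internal mapping
		if !strings.HasSuffix(line, "control-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	fmt.Println(strings.Join(kept, "\n")) // print instead of writing back
}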
	I0915 07:02:34.877732   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:02:35.000696   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:02:35.018898   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:02:35.019383   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:02:35.019436   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:02:35.034104   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45453
	I0915 07:02:35.034487   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:02:35.034941   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:02:35.034958   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:02:35.035235   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:02:35.035476   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:02:35.035625   26835 start.go:317] joinCluster: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:02:35.035755   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0915 07:02:35.035775   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:02:35.038626   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:35.038972   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:02:35.038996   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:02:35.039145   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:02:35.039311   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:02:35.039444   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:02:35.039578   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:02:35.208601   26835 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:35.208645   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 50hbpr.238ifb3e9gglapy2 --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I0915 07:02:57.207397   26835 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 50hbpr.238ifb3e9gglapy2 --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (21.998728599s)
	I0915 07:02:57.207432   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0915 07:02:57.776899   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-670527-m02 minikube.k8s.io/updated_at=2024_09_15T07_02_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=ha-670527 minikube.k8s.io/primary=false
	I0915 07:02:57.893456   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-670527-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0915 07:02:58.002512   26835 start.go:319] duration metric: took 22.966886384s to joinCluster
	I0915 07:02:58.002576   26835 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:02:58.002874   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:02:58.004496   26835 out.go:177] * Verifying Kubernetes components...
	I0915 07:02:58.005948   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:02:58.281369   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:02:58.297533   26835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:02:58.297786   26835 kapi.go:59] client config for ha-670527: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key", CAFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0915 07:02:58.297873   26835 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0915 07:02:58.298084   26835 node_ready.go:35] waiting up to 6m0s for node "ha-670527-m02" to be "Ready" ...
	I0915 07:02:58.298195   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:58.298206   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:58.298217   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:58.298224   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:58.309029   26835 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0915 07:02:58.798950   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:58.798970   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:58.798977   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:58.798981   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:58.803287   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:02:59.298330   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:59.298355   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:59.298363   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:59.298366   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:59.302108   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:02:59.799045   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:02:59.799070   26835 round_trippers.go:469] Request Headers:
	I0915 07:02:59.799087   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:02:59.799093   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:02:59.803446   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:00.299017   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:00.299038   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:00.299048   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:00.299055   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:00.303017   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:00.303922   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:00.799140   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:00.799164   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:00.799175   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:00.799180   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:00.802947   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:01.298949   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:01.298969   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:01.298976   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:01.298980   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:01.302732   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:01.798310   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:01.798330   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:01.798338   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:01.798343   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:01.801085   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:02.299204   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:02.299224   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:02.299232   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:02.299235   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:02.302816   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:02.798453   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:02.798473   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:02.798481   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:02.798485   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:02.802331   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:02.802892   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:03.298701   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:03.298740   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:03.298751   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:03.298757   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:03.301969   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:03.799044   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:03.799065   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:03.799073   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:03.799077   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:03.802064   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:04.299042   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:04.299062   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:04.299070   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:04.299074   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:04.302546   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:04.798566   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:04.798588   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:04.798599   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:04.798603   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:04.802128   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:05.298324   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:05.298343   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:05.298351   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:05.298355   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:05.301627   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:05.304686   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:05.798531   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:05.798552   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:05.798560   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:05.798565   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:05.805362   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:03:06.298962   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:06.298986   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:06.298994   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:06.298999   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:06.301984   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:06.799027   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:06.799049   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:06.799059   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:06.799064   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:06.805682   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:03:07.298899   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:07.298920   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:07.298927   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:07.298930   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:07.302021   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:07.798423   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:07.798449   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:07.798457   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:07.798465   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:07.801464   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:07.802365   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:08.299098   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:08.299117   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:08.299124   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:08.299129   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:08.301958   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:08.799072   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:08.799096   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:08.799105   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:08.799110   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:08.802190   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:09.299221   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:09.299242   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:09.299251   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:09.299254   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:09.302314   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:09.799051   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:09.799073   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:09.799081   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:09.799087   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:09.802695   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:09.803308   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:10.299120   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:10.299143   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:10.299152   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:10.299160   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:10.302205   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:10.799023   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:10.799045   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:10.799055   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:10.799062   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:10.802433   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:11.298625   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:11.298644   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:11.298652   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:11.298656   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:11.301528   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:11.799072   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:11.799096   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:11.799107   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:11.799113   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:11.801945   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:12.299041   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:12.299062   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:12.299070   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:12.299091   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:12.302359   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:12.302978   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:12.799029   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:12.799052   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:12.799059   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:12.799063   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:12.802509   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:13.298858   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:13.298879   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:13.298886   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:13.298891   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:13.302105   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:13.799152   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:13.799172   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:13.799180   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:13.799184   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:13.802280   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:14.299044   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:14.299063   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:14.299071   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:14.299074   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:14.301972   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:14.799071   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:14.799094   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:14.799103   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:14.799110   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:14.802858   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:14.803543   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:15.298423   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:15.298443   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:15.298450   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:15.298456   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:15.301463   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:15.799040   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:15.799060   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:15.799067   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:15.799071   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:15.802044   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:16.299050   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:16.299073   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:16.299085   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:16.299091   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:16.302207   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:16.799031   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:16.799051   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:16.799058   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:16.799061   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:16.802023   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.299041   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:17.299066   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.299076   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.299081   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.301770   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.302269   26835 node_ready.go:53] node "ha-670527-m02" has status "Ready":"False"
	I0915 07:03:17.799048   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:17.799068   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.799076   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.799080   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.802248   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:17.802880   26835 node_ready.go:49] node "ha-670527-m02" has status "Ready":"True"
	I0915 07:03:17.802897   26835 node_ready.go:38] duration metric: took 19.504788225s for node "ha-670527-m02" to be "Ready" ...
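The long run of GET /api/v1/nodes/ha-670527-m02 requests above is a hand-rolled readiness poll: fetch the node roughly every 500ms and stop once its Ready condition reports True (which happened after ~19.5s here). A minimal client-go sketch of the same loop, under the assumption that the kubeconfig path from this log is usable locally:

// node_ready_wait.go - sketch of polling a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19644-6166/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-670527-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls on roughly this cadence
	}
}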
	I0915 07:03:17.802907   26835 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:03:17.803009   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:17.803023   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.803033   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.803037   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.807239   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:17.813322   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.813405   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4w6x7
	I0915 07:03:17.813416   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.813426   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.813432   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.816253   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.817178   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:17.817193   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.817201   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.817206   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.819382   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.819889   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.819905   26835 pod_ready.go:82] duration metric: took 6.561965ms for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.819916   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.819970   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lpj44
	I0915 07:03:17.819979   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.819989   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.819995   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.822316   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.823272   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:17.823286   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.823293   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.823297   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.825230   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:03:17.825653   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.825667   26835 pod_ready.go:82] duration metric: took 5.744951ms for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.825675   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.825716   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527
	I0915 07:03:17.825723   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.825730   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.825733   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.827910   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.828335   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:17.828349   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.828357   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.828361   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.830477   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.830858   26835 pod_ready.go:93] pod "etcd-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.830872   26835 pod_ready.go:82] duration metric: took 5.191041ms for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.830880   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.830918   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m02
	I0915 07:03:17.830928   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.830935   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.830940   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.833032   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:17.833460   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:17.833473   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.833480   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.833483   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:17.835371   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:03:17.835725   26835 pod_ready.go:93] pod "etcd-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:17.835739   26835 pod_ready.go:82] duration metric: took 4.853737ms for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.835751   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:17.999289   26835 request.go:632] Waited for 163.492142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:03:17.999360   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:03:17.999371   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:17.999381   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:17.999393   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.003149   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:18.199247   26835 request.go:632] Waited for 195.2673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.199321   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.199328   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.199338   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.199350   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.202285   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:18.202824   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:18.202840   26835 pod_ready.go:82] duration metric: took 367.082845ms for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
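The "Waited ... due to client-side throttling, not priority and fairness" lines in this pod-readiness phase come from client-go's local rate limiter (default 5 QPS, burst 10), not from the API server. A minimal sketch of where those knobs live on a rest.Config; the kubeconfig path is an assumption copied from the log, and the values chosen are arbitrary:

// qps_tweak.go - sketch of relaxing client-go's local rate limiter.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19644-6166/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Zero values mean client-go's defaults (5 QPS, burst 10); raising them
	// removes the artificial waits logged as "client-side throttling".
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("client configured with QPS", cfg.QPS, "burst", cfg.Burst)
}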
	I0915 07:03:18.202849   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:18.400062   26835 request.go:632] Waited for 197.14969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:03:18.400162   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:03:18.400174   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.400185   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.400192   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.403454   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:18.599574   26835 request.go:632] Waited for 195.382614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:18.599625   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:18.599632   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.599645   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.599651   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.606574   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:03:18.607107   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:18.607125   26835 pod_ready.go:82] duration metric: took 404.270298ms for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:18.607134   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:18.799081   26835 request.go:632] Waited for 191.883757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:03:18.799145   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:03:18.799151   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.799158   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.799162   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:18.802381   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:18.999738   26835 request.go:632] Waited for 196.363128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.999821   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:18.999832   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:18.999840   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:18.999844   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.003038   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:19.003594   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:19.003613   26835 pod_ready.go:82] duration metric: took 396.471292ms for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.003628   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.199678   26835 request.go:632] Waited for 195.975884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:03:19.199745   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:03:19.199752   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.199761   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.199768   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.203357   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:19.399417   26835 request.go:632] Waited for 195.353477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:19.399506   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:19.399518   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.399528   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.399535   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.402623   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:19.403171   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:19.403193   26835 pod_ready.go:82] duration metric: took 399.556435ms for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.403206   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.599292   26835 request.go:632] Waited for 196.019957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:03:19.599372   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:03:19.599383   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.599394   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.599403   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.602327   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:19.799354   26835 request.go:632] Waited for 196.344034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:19.799408   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:19.799413   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:19.799420   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:19.799423   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:19.802227   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:19.803970   26835 pod_ready.go:93] pod "kube-proxy-25xtk" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:19.803991   26835 pod_ready.go:82] duration metric: took 400.772903ms for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.804002   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:19.999969   26835 request.go:632] Waited for 195.901993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:03:20.000067   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:03:20.000076   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.000086   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.000096   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.003916   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.199923   26835 request.go:632] Waited for 195.280331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.199979   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.199986   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.199996   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.200001   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.203332   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.203957   26835 pod_ready.go:93] pod "kube-proxy-kt79t" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:20.203977   26835 pod_ready.go:82] duration metric: took 399.967571ms for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.203989   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.399069   26835 request.go:632] Waited for 195.010415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:03:20.399130   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:03:20.399136   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.399143   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.399146   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.403009   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.599403   26835 request.go:632] Waited for 195.788748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:20.599463   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:03:20.599471   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.599480   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.599485   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.602055   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:20.602618   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:20.602636   26835 pod_ready.go:82] duration metric: took 398.640734ms for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.602646   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:20.799552   26835 request.go:632] Waited for 196.846292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:03:20.799620   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:03:20.799627   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.799634   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.799638   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:20.802765   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:20.999660   26835 request.go:632] Waited for 196.342764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.999732   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:03:20.999738   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:20.999744   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:20.999747   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.002704   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:03:21.003350   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:03:21.003385   26835 pod_ready.go:82] duration metric: took 400.731335ms for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:03:21.003401   26835 pod_ready.go:39] duration metric: took 3.200461526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
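Editor's note: the pod_ready.go waits above poll each control-plane pod until its PodReady condition turns True. A minimal sketch of that pattern with client-go is shown below; the kubeconfig path and the 500ms poll interval are illustrative assumptions, not minikube's actual implementation.

// Sketch only: poll a kube-system pod until PodReady is True, assuming a local kubeconfig.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-ha-670527", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
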
	I0915 07:03:21.003421   26835 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:03:21.003481   26835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:03:21.021041   26835 api_server.go:72] duration metric: took 23.018438257s to wait for apiserver process to appear ...
	I0915 07:03:21.021070   26835 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:03:21.021102   26835 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0915 07:03:21.028517   26835 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0915 07:03:21.028579   26835 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0915 07:03:21.028587   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.028594   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.028598   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.029612   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:03:21.029684   26835 api_server.go:141] control plane version: v1.31.1
	I0915 07:03:21.029697   26835 api_server.go:131] duration metric: took 8.620595ms to wait for apiserver health ...
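Editor's note: the healthz probe logged above is a plain GET against the apiserver that expects the body "ok". A hedged sketch follows; it skips TLS verification purely for brevity, whereas minikube authenticates with client certificates, and the IP is taken from this run's logs.

// Sketch only: probe /healthz on the apiserver and print the status and body.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
	}
	resp, err := client.Get("https://192.168.39.54:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}
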
	I0915 07:03:21.029704   26835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:03:21.200146   26835 request.go:632] Waited for 170.344732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.200201   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.200207   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.200214   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.200219   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.205210   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:21.209322   26835 system_pods.go:59] 17 kube-system pods found
	I0915 07:03:21.209349   26835 system_pods.go:61] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:03:21.209355   26835 system_pods.go:61] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:03:21.209358   26835 system_pods.go:61] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:03:21.209362   26835 system_pods.go:61] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:03:21.209365   26835 system_pods.go:61] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:03:21.209369   26835 system_pods.go:61] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:03:21.209372   26835 system_pods.go:61] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:03:21.209375   26835 system_pods.go:61] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:03:21.209378   26835 system_pods.go:61] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:03:21.209381   26835 system_pods.go:61] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:03:21.209384   26835 system_pods.go:61] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:03:21.209386   26835 system_pods.go:61] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:03:21.209389   26835 system_pods.go:61] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:03:21.209393   26835 system_pods.go:61] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:03:21.209399   26835 system_pods.go:61] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:03:21.209402   26835 system_pods.go:61] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:03:21.209404   26835 system_pods.go:61] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:03:21.209410   26835 system_pods.go:74] duration metric: took 179.701914ms to wait for pod list to return data ...
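Editor's note: the repeated "Waited ... due to client-side throttling" lines come from client-go's default rate limiter, not from API priority and fairness on the server. For reference, raising QPS and Burst on the rest.Config is how a client reduces those waits; the values below are arbitrary examples.

// Sketch only: relax the client-side rate limiter before building the clientset.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	_ = kubernetes.NewForConfigOrDie(cfg)
}
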
	I0915 07:03:21.209420   26835 default_sa.go:34] waiting for default service account to be created ...
	I0915 07:03:21.399918   26835 request.go:632] Waited for 190.415031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:03:21.399974   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:03:21.399979   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.399993   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.399998   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.404183   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:03:21.404968   26835 default_sa.go:45] found service account: "default"
	I0915 07:03:21.404990   26835 default_sa.go:55] duration metric: took 195.564704ms for default service account to be created ...
	I0915 07:03:21.405001   26835 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 07:03:21.599538   26835 request.go:632] Waited for 194.456381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.599591   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:03:21.599596   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.599606   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.599610   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.604857   26835 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:03:21.608974   26835 system_pods.go:86] 17 kube-system pods found
	I0915 07:03:21.609002   26835 system_pods.go:89] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:03:21.609010   26835 system_pods.go:89] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:03:21.609016   26835 system_pods.go:89] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:03:21.609021   26835 system_pods.go:89] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:03:21.609026   26835 system_pods.go:89] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:03:21.609030   26835 system_pods.go:89] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:03:21.609036   26835 system_pods.go:89] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:03:21.609042   26835 system_pods.go:89] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:03:21.609048   26835 system_pods.go:89] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:03:21.609054   26835 system_pods.go:89] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:03:21.609061   26835 system_pods.go:89] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:03:21.609069   26835 system_pods.go:89] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:03:21.609075   26835 system_pods.go:89] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:03:21.609081   26835 system_pods.go:89] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:03:21.609087   26835 system_pods.go:89] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:03:21.609093   26835 system_pods.go:89] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:03:21.609099   26835 system_pods.go:89] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:03:21.609110   26835 system_pods.go:126] duration metric: took 204.103519ms to wait for k8s-apps to be running ...
	I0915 07:03:21.609130   26835 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:03:21.609180   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:03:21.625587   26835 system_svc.go:56] duration metric: took 16.446998ms WaitForService to wait for kubelet
	I0915 07:03:21.625619   26835 kubeadm.go:582] duration metric: took 23.623022618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
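Editor's note: the kubelet check above runs "systemctl is-active --quiet service kubelet" over SSH and treats a zero exit status as "running". Run locally, the same probe looks like the sketch below (without the "service" prefix minikube passes).

// Sketch only: check whether the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A zero exit status from `systemctl is-active --quiet` means the unit is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
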
	I0915 07:03:21.625636   26835 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:03:21.800081   26835 request.go:632] Waited for 174.329572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0915 07:03:21.800145   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0915 07:03:21.800153   26835 round_trippers.go:469] Request Headers:
	I0915 07:03:21.800164   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:03:21.800174   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:03:21.804174   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:03:21.804998   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:03:21.805032   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:03:21.805052   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:03:21.805057   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:03:21.805063   26835 node_conditions.go:105] duration metric: took 179.422133ms to run NodePressure ...
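Editor's note: the NodePressure step above lists the nodes and reads their reported capacity (two nodes here, each with 2 CPUs and 17734596Ki of ephemeral storage). A minimal client-go sketch of that read follows; the kubeconfig path is an assumption.

// Sketch only: list nodes and print cpu and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
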
	I0915 07:03:21.805076   26835 start.go:241] waiting for startup goroutines ...
	I0915 07:03:21.805110   26835 start.go:255] writing updated cluster config ...
	I0915 07:03:21.807329   26835 out.go:201] 
	I0915 07:03:21.808633   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:03:21.808730   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:03:21.810021   26835 out.go:177] * Starting "ha-670527-m03" control-plane node in "ha-670527" cluster
	I0915 07:03:21.811002   26835 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:03:21.811018   26835 cache.go:56] Caching tarball of preloaded images
	I0915 07:03:21.811099   26835 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:03:21.811110   26835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:03:21.811213   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:03:21.811397   26835 start.go:360] acquireMachinesLock for ha-670527-m03: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:03:21.811447   26835 start.go:364] duration metric: took 30.463µs to acquireMachinesLock for "ha-670527-m03"
	I0915 07:03:21.811468   26835 start.go:93] Provisioning new machine with config: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:03:21.811593   26835 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0915 07:03:21.813055   26835 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 07:03:21.813128   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:21.813160   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:21.827379   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0915 07:03:21.827819   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:21.828285   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:21.828304   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:21.828594   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:21.828770   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:21.828896   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:21.829034   26835 start.go:159] libmachine.API.Create for "ha-670527" (driver="kvm2")
	I0915 07:03:21.829062   26835 client.go:168] LocalClient.Create starting
	I0915 07:03:21.829084   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 07:03:21.829112   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:03:21.829125   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:03:21.829180   26835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 07:03:21.829198   26835 main.go:141] libmachine: Decoding PEM data...
	I0915 07:03:21.829208   26835 main.go:141] libmachine: Parsing certificate...
	I0915 07:03:21.829220   26835 main.go:141] libmachine: Running pre-create checks...
	I0915 07:03:21.829228   26835 main.go:141] libmachine: (ha-670527-m03) Calling .PreCreateCheck
	I0915 07:03:21.829350   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetConfigRaw
	I0915 07:03:21.829692   26835 main.go:141] libmachine: Creating machine...
	I0915 07:03:21.829705   26835 main.go:141] libmachine: (ha-670527-m03) Calling .Create
	I0915 07:03:21.829836   26835 main.go:141] libmachine: (ha-670527-m03) Creating KVM machine...
	I0915 07:03:21.830982   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found existing default KVM network
	I0915 07:03:21.831136   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found existing private KVM network mk-ha-670527
	I0915 07:03:21.831307   26835 main.go:141] libmachine: (ha-670527-m03) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03 ...
	I0915 07:03:21.831328   26835 main.go:141] libmachine: (ha-670527-m03) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 07:03:21.831398   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:21.831307   27575 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:03:21.831470   26835 main.go:141] libmachine: (ha-670527-m03) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 07:03:22.066896   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:22.066781   27575 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa...
	I0915 07:03:22.155557   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:22.155430   27575 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/ha-670527-m03.rawdisk...
	I0915 07:03:22.155590   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Writing magic tar header
	I0915 07:03:22.155600   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Writing SSH key tar header
	I0915 07:03:22.155608   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:22.155559   27575 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03 ...
	I0915 07:03:22.155677   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03
	I0915 07:03:22.155713   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 07:03:22.155729   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03 (perms=drwx------)
	I0915 07:03:22.155739   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:03:22.155750   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 07:03:22.155763   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 07:03:22.155775   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 07:03:22.155780   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home/jenkins
	I0915 07:03:22.155786   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Checking permissions on dir: /home
	I0915 07:03:22.155793   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Skipping /home - not owner
	I0915 07:03:22.155841   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 07:03:22.155872   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 07:03:22.155894   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 07:03:22.155910   26835 main.go:141] libmachine: (ha-670527-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 07:03:22.155933   26835 main.go:141] libmachine: (ha-670527-m03) Creating domain...
	I0915 07:03:22.156757   26835 main.go:141] libmachine: (ha-670527-m03) define libvirt domain using xml: 
	I0915 07:03:22.156768   26835 main.go:141] libmachine: (ha-670527-m03) <domain type='kvm'>
	I0915 07:03:22.156795   26835 main.go:141] libmachine: (ha-670527-m03)   <name>ha-670527-m03</name>
	I0915 07:03:22.156819   26835 main.go:141] libmachine: (ha-670527-m03)   <memory unit='MiB'>2200</memory>
	I0915 07:03:22.156847   26835 main.go:141] libmachine: (ha-670527-m03)   <vcpu>2</vcpu>
	I0915 07:03:22.156869   26835 main.go:141] libmachine: (ha-670527-m03)   <features>
	I0915 07:03:22.156892   26835 main.go:141] libmachine: (ha-670527-m03)     <acpi/>
	I0915 07:03:22.156902   26835 main.go:141] libmachine: (ha-670527-m03)     <apic/>
	I0915 07:03:22.156909   26835 main.go:141] libmachine: (ha-670527-m03)     <pae/>
	I0915 07:03:22.156915   26835 main.go:141] libmachine: (ha-670527-m03)     
	I0915 07:03:22.156922   26835 main.go:141] libmachine: (ha-670527-m03)   </features>
	I0915 07:03:22.156933   26835 main.go:141] libmachine: (ha-670527-m03)   <cpu mode='host-passthrough'>
	I0915 07:03:22.156940   26835 main.go:141] libmachine: (ha-670527-m03)   
	I0915 07:03:22.156950   26835 main.go:141] libmachine: (ha-670527-m03)   </cpu>
	I0915 07:03:22.156969   26835 main.go:141] libmachine: (ha-670527-m03)   <os>
	I0915 07:03:22.156987   26835 main.go:141] libmachine: (ha-670527-m03)     <type>hvm</type>
	I0915 07:03:22.156999   26835 main.go:141] libmachine: (ha-670527-m03)     <boot dev='cdrom'/>
	I0915 07:03:22.157009   26835 main.go:141] libmachine: (ha-670527-m03)     <boot dev='hd'/>
	I0915 07:03:22.157017   26835 main.go:141] libmachine: (ha-670527-m03)     <bootmenu enable='no'/>
	I0915 07:03:22.157025   26835 main.go:141] libmachine: (ha-670527-m03)   </os>
	I0915 07:03:22.157035   26835 main.go:141] libmachine: (ha-670527-m03)   <devices>
	I0915 07:03:22.157043   26835 main.go:141] libmachine: (ha-670527-m03)     <disk type='file' device='cdrom'>
	I0915 07:03:22.157056   26835 main.go:141] libmachine: (ha-670527-m03)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/boot2docker.iso'/>
	I0915 07:03:22.157071   26835 main.go:141] libmachine: (ha-670527-m03)       <target dev='hdc' bus='scsi'/>
	I0915 07:03:22.157083   26835 main.go:141] libmachine: (ha-670527-m03)       <readonly/>
	I0915 07:03:22.157093   26835 main.go:141] libmachine: (ha-670527-m03)     </disk>
	I0915 07:03:22.157102   26835 main.go:141] libmachine: (ha-670527-m03)     <disk type='file' device='disk'>
	I0915 07:03:22.157114   26835 main.go:141] libmachine: (ha-670527-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 07:03:22.157127   26835 main.go:141] libmachine: (ha-670527-m03)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/ha-670527-m03.rawdisk'/>
	I0915 07:03:22.157135   26835 main.go:141] libmachine: (ha-670527-m03)       <target dev='hda' bus='virtio'/>
	I0915 07:03:22.157154   26835 main.go:141] libmachine: (ha-670527-m03)     </disk>
	I0915 07:03:22.157170   26835 main.go:141] libmachine: (ha-670527-m03)     <interface type='network'>
	I0915 07:03:22.157182   26835 main.go:141] libmachine: (ha-670527-m03)       <source network='mk-ha-670527'/>
	I0915 07:03:22.157192   26835 main.go:141] libmachine: (ha-670527-m03)       <model type='virtio'/>
	I0915 07:03:22.157200   26835 main.go:141] libmachine: (ha-670527-m03)     </interface>
	I0915 07:03:22.157207   26835 main.go:141] libmachine: (ha-670527-m03)     <interface type='network'>
	I0915 07:03:22.157213   26835 main.go:141] libmachine: (ha-670527-m03)       <source network='default'/>
	I0915 07:03:22.157219   26835 main.go:141] libmachine: (ha-670527-m03)       <model type='virtio'/>
	I0915 07:03:22.157224   26835 main.go:141] libmachine: (ha-670527-m03)     </interface>
	I0915 07:03:22.157228   26835 main.go:141] libmachine: (ha-670527-m03)     <serial type='pty'>
	I0915 07:03:22.157233   26835 main.go:141] libmachine: (ha-670527-m03)       <target port='0'/>
	I0915 07:03:22.157242   26835 main.go:141] libmachine: (ha-670527-m03)     </serial>
	I0915 07:03:22.157249   26835 main.go:141] libmachine: (ha-670527-m03)     <console type='pty'>
	I0915 07:03:22.157254   26835 main.go:141] libmachine: (ha-670527-m03)       <target type='serial' port='0'/>
	I0915 07:03:22.157261   26835 main.go:141] libmachine: (ha-670527-m03)     </console>
	I0915 07:03:22.157265   26835 main.go:141] libmachine: (ha-670527-m03)     <rng model='virtio'>
	I0915 07:03:22.157274   26835 main.go:141] libmachine: (ha-670527-m03)       <backend model='random'>/dev/random</backend>
	I0915 07:03:22.157280   26835 main.go:141] libmachine: (ha-670527-m03)     </rng>
	I0915 07:03:22.157286   26835 main.go:141] libmachine: (ha-670527-m03)     
	I0915 07:03:22.157290   26835 main.go:141] libmachine: (ha-670527-m03)     
	I0915 07:03:22.157297   26835 main.go:141] libmachine: (ha-670527-m03)   </devices>
	I0915 07:03:22.157301   26835 main.go:141] libmachine: (ha-670527-m03) </domain>
	I0915 07:03:22.157310   26835 main.go:141] libmachine: (ha-670527-m03) 
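Editor's note: libmachine generates the domain XML printed above and registers it with libvirt. Roughly the same effect can be had from the virsh CLI; the sketch below is only an illustration of that step, with a placeholder for the XML body and an assumed local qemu:///system connection.

// Sketch only: define and start a libvirt domain from an XML file via virsh.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// domainXML would hold the <domain type='kvm'> document shown in the log above.
	domainXML := "<domain type='kvm'>...</domain>" // placeholder, not a working definition
	if err := os.WriteFile("/tmp/ha-670527-m03.xml", []byte(domainXML), 0o644); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"virsh", "--connect", "qemu:///system", "define", "/tmp/ha-670527-m03.xml"},
		{"virsh", "--connect", "qemu:///system", "start", "ha-670527-m03"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
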
	I0915 07:03:22.163801   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:1a:da:d7 in network default
	I0915 07:03:22.164325   26835 main.go:141] libmachine: (ha-670527-m03) Ensuring networks are active...
	I0915 07:03:22.164341   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:22.165076   26835 main.go:141] libmachine: (ha-670527-m03) Ensuring network default is active
	I0915 07:03:22.165412   26835 main.go:141] libmachine: (ha-670527-m03) Ensuring network mk-ha-670527 is active
	I0915 07:03:22.165935   26835 main.go:141] libmachine: (ha-670527-m03) Getting domain xml...
	I0915 07:03:22.166619   26835 main.go:141] libmachine: (ha-670527-m03) Creating domain...
	I0915 07:03:23.403097   26835 main.go:141] libmachine: (ha-670527-m03) Waiting to get IP...
	I0915 07:03:23.404077   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:23.404560   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:23.404596   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:23.404542   27575 retry.go:31] will retry after 216.027867ms: waiting for machine to come up
	I0915 07:03:23.622217   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:23.622712   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:23.622739   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:23.622650   27575 retry.go:31] will retry after 379.106761ms: waiting for machine to come up
	I0915 07:03:24.002939   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:24.003411   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:24.003467   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:24.003366   27575 retry.go:31] will retry after 293.965798ms: waiting for machine to come up
	I0915 07:03:24.298820   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:24.299267   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:24.299299   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:24.299222   27575 retry.go:31] will retry after 496.993891ms: waiting for machine to come up
	I0915 07:03:24.798010   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:24.798485   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:24.798512   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:24.798399   27575 retry.go:31] will retry after 681.561294ms: waiting for machine to come up
	I0915 07:03:25.481130   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:25.481859   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:25.481880   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:25.481822   27575 retry.go:31] will retry after 816.437613ms: waiting for machine to come up
	I0915 07:03:26.299463   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:26.299923   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:26.299949   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:26.299880   27575 retry.go:31] will retry after 933.139751ms: waiting for machine to come up
	I0915 07:03:27.234824   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:27.235283   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:27.235305   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:27.235231   27575 retry.go:31] will retry after 1.01772382s: waiting for machine to come up
	I0915 07:03:28.254301   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:28.254706   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:28.254734   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:28.254660   27575 retry.go:31] will retry after 1.647555623s: waiting for machine to come up
	I0915 07:03:29.904388   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:29.904947   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:29.904974   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:29.904882   27575 retry.go:31] will retry after 1.501301991s: waiting for machine to come up
	I0915 07:03:31.407599   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:31.407990   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:31.408023   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:31.407965   27575 retry.go:31] will retry after 1.860767384s: waiting for machine to come up
	I0915 07:03:33.270491   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:33.271016   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:33.271038   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:33.270970   27575 retry.go:31] will retry after 2.482506082s: waiting for machine to come up
	I0915 07:03:35.756546   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:35.756901   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:35.756923   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:35.756875   27575 retry.go:31] will retry after 3.598234046s: waiting for machine to come up
	I0915 07:03:39.356217   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:39.356615   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find current IP address of domain ha-670527-m03 in network mk-ha-670527
	I0915 07:03:39.356642   26835 main.go:141] libmachine: (ha-670527-m03) DBG | I0915 07:03:39.356579   27575 retry.go:31] will retry after 5.569722625s: waiting for machine to come up
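Editor's note: the retry.go lines above poll for the guest's DHCP lease with a growing, jittered delay until an IP appears. A rough sketch of that pattern follows; lookupIP is a hypothetical stand-in for the libvirt lease query, not a real minikube helper, and the delays are illustrative.

// Sketch only: wait for an IP with jittered, doubling backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP(mac string) (string, error) {
	// Stand-in: in minikube this inspects the libvirt network's DHCP leases.
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:b4:8f:a3", 3*time.Second); err != nil {
		fmt.Println("gave up:", err)
	} else {
		fmt.Println("got IP:", ip)
	}
}
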
	I0915 07:03:44.930420   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:44.930911   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has current primary IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:44.930935   26835 main.go:141] libmachine: (ha-670527-m03) Found IP for machine: 192.168.39.4
	I0915 07:03:44.930946   26835 main.go:141] libmachine: (ha-670527-m03) Reserving static IP address...
	I0915 07:03:44.931285   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find host DHCP lease matching {name: "ha-670527-m03", mac: "52:54:00:b4:8f:a3", ip: "192.168.39.4"} in network mk-ha-670527
	I0915 07:03:45.003476   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Getting to WaitForSSH function...
	I0915 07:03:45.003534   26835 main.go:141] libmachine: (ha-670527-m03) Reserved static IP address: 192.168.39.4
	I0915 07:03:45.003549   26835 main.go:141] libmachine: (ha-670527-m03) Waiting for SSH to be available...
	I0915 07:03:45.007019   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:45.007412   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527
	I0915 07:03:45.007429   26835 main.go:141] libmachine: (ha-670527-m03) DBG | unable to find defined IP address of network mk-ha-670527 interface with MAC address 52:54:00:b4:8f:a3
	I0915 07:03:45.007648   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH client type: external
	I0915 07:03:45.007670   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa (-rw-------)
	I0915 07:03:45.007766   26835 main.go:141] libmachine: (ha-670527-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:03:45.007792   26835 main.go:141] libmachine: (ha-670527-m03) DBG | About to run SSH command:
	I0915 07:03:45.007810   26835 main.go:141] libmachine: (ha-670527-m03) DBG | exit 0
	I0915 07:03:45.011926   26835 main.go:141] libmachine: (ha-670527-m03) DBG | SSH cmd err, output: exit status 255: 
	I0915 07:03:45.011947   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0915 07:03:45.011953   26835 main.go:141] libmachine: (ha-670527-m03) DBG | command : exit 0
	I0915 07:03:45.011959   26835 main.go:141] libmachine: (ha-670527-m03) DBG | err     : exit status 255
	I0915 07:03:45.011968   26835 main.go:141] libmachine: (ha-670527-m03) DBG | output  : 
	I0915 07:03:48.012738   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Getting to WaitForSSH function...
	I0915 07:03:48.015038   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.015551   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.015578   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.015738   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH client type: external
	I0915 07:03:48.015767   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa (-rw-------)
	I0915 07:03:48.015797   26835 main.go:141] libmachine: (ha-670527-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:03:48.015808   26835 main.go:141] libmachine: (ha-670527-m03) DBG | About to run SSH command:
	I0915 07:03:48.015816   26835 main.go:141] libmachine: (ha-670527-m03) DBG | exit 0
	I0915 07:03:48.142223   26835 main.go:141] libmachine: (ha-670527-m03) DBG | SSH cmd err, output: <nil>: 
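Editor's note: the WaitForSSH probe above shells out to ssh with non-interactive options and runs "exit 0"; the first attempt fails with exit status 255 because no DHCP lease exists yet, and the retry after the lease appears succeeds. The sketch below mirrors that probe using the same ssh options, key path, and IP shown in the log.

// Sketch only: treat a zero exit status from `ssh ... exit 0` as "SSH is available".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(ip, key string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil
}

func main() {
	ip := "192.168.39.4"
	key := "/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa"
	for !sshReady(ip, key) {
		fmt.Println("ssh not ready yet, retrying...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}
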
	I0915 07:03:48.142476   26835 main.go:141] libmachine: (ha-670527-m03) KVM machine creation complete!
	I0915 07:03:48.142743   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetConfigRaw
	I0915 07:03:48.143269   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:48.143488   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:48.143647   26835 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 07:03:48.143661   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:03:48.144969   26835 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 07:03:48.144982   26835 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 07:03:48.144987   26835 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 07:03:48.144992   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.147561   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.147967   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.147991   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.148171   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.148364   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.148516   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.148675   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.148859   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.149064   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.149077   26835 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 07:03:48.253160   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:03:48.253181   26835 main.go:141] libmachine: Detecting the provisioner...
	I0915 07:03:48.253191   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.255898   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.256218   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.256244   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.256420   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.256602   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.256718   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.256824   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.256964   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.257165   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.257182   26835 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 07:03:48.371163   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 07:03:48.371239   26835 main.go:141] libmachine: found compatible host: buildroot
	I0915 07:03:48.371252   26835 main.go:141] libmachine: Provisioning with buildroot...
	I0915 07:03:48.371265   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:48.371477   26835 buildroot.go:166] provisioning hostname "ha-670527-m03"
	I0915 07:03:48.371504   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:48.371749   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.374417   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.374809   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.374831   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.374995   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.375167   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.375322   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.375450   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.375564   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.375715   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.375728   26835 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527-m03 && echo "ha-670527-m03" | sudo tee /etc/hostname
	I0915 07:03:48.498262   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527-m03
	
	I0915 07:03:48.498289   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.501040   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.501426   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.501450   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.501640   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.501829   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.501978   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.502080   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.502247   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:48.502410   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:48.502424   26835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:03:48.618930   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:03:48.618954   26835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:03:48.618969   26835 buildroot.go:174] setting up certificates
	I0915 07:03:48.618977   26835 provision.go:84] configureAuth start
	I0915 07:03:48.618986   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetMachineName
	I0915 07:03:48.619195   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:48.621841   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.622193   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.622219   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.622363   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.624411   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.624732   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.624754   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.624889   26835 provision.go:143] copyHostCerts
	I0915 07:03:48.624917   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:03:48.624951   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:03:48.624960   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:03:48.625023   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:03:48.625088   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:03:48.625102   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:03:48.625106   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:03:48.625130   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:03:48.625168   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:03:48.625185   26835 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:03:48.625191   26835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:03:48.625218   26835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:03:48.625265   26835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527-m03 san=[127.0.0.1 192.168.39.4 ha-670527-m03 localhost minikube]
	I0915 07:03:48.959609   26835 provision.go:177] copyRemoteCerts
	I0915 07:03:48.959660   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:03:48.959689   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:48.962324   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.962696   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:48.962724   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:48.962853   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:48.963056   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:48.963218   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:48.963371   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.048634   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:03:49.048700   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:03:49.076049   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:03:49.076122   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 07:03:49.100276   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:03:49.100358   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:03:49.124687   26835 provision.go:87] duration metric: took 505.698463ms to configureAuth
	I0915 07:03:49.124710   26835 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:03:49.124903   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:03:49.124986   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.127619   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.127977   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.128007   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.128285   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.128496   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.128692   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.128855   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.129024   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:49.129184   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:49.129197   26835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:03:49.365319   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:03:49.365345   26835 main.go:141] libmachine: Checking connection to Docker...
	I0915 07:03:49.365355   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetURL
	I0915 07:03:49.366513   26835 main.go:141] libmachine: (ha-670527-m03) DBG | Using libvirt version 6000000
	I0915 07:03:49.369102   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.369512   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.369537   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.369706   26835 main.go:141] libmachine: Docker is up and running!
	I0915 07:03:49.369724   26835 main.go:141] libmachine: Reticulating splines...
	I0915 07:03:49.369730   26835 client.go:171] duration metric: took 27.540662889s to LocalClient.Create
	I0915 07:03:49.369751   26835 start.go:167] duration metric: took 27.540717616s to libmachine.API.Create "ha-670527"
	I0915 07:03:49.369760   26835 start.go:293] postStartSetup for "ha-670527-m03" (driver="kvm2")
	I0915 07:03:49.369769   26835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:03:49.369783   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.370010   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:03:49.370031   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.372171   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.372483   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.372505   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.372708   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.372865   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.372999   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.373120   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.456135   26835 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:03:49.460434   26835 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:03:49.460467   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:03:49.460531   26835 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:03:49.460598   26835 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:03:49.460607   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:03:49.460684   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:03:49.469981   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:03:49.494580   26835 start.go:296] duration metric: took 124.805677ms for postStartSetup
	I0915 07:03:49.494624   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetConfigRaw
	I0915 07:03:49.495201   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:49.498123   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.498539   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.498566   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.498844   26835 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:03:49.499038   26835 start.go:128] duration metric: took 27.687436584s to createHost
	I0915 07:03:49.499059   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.501288   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.501633   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.501659   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.501794   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.501971   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.502132   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.502270   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.502513   26835 main.go:141] libmachine: Using SSH client type: native
	I0915 07:03:49.502731   26835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0915 07:03:49.502744   26835 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:03:49.610848   26835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726383829.581740186
	
	I0915 07:03:49.610868   26835 fix.go:216] guest clock: 1726383829.581740186
	I0915 07:03:49.610875   26835 fix.go:229] Guest: 2024-09-15 07:03:49.581740186 +0000 UTC Remote: 2024-09-15 07:03:49.499048589 +0000 UTC m=+147.194414259 (delta=82.691597ms)
	I0915 07:03:49.610890   26835 fix.go:200] guest clock delta is within tolerance: 82.691597ms
	I0915 07:03:49.610895   26835 start.go:83] releasing machines lock for "ha-670527-m03", held for 27.799437777s
	I0915 07:03:49.610911   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.611135   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:49.613829   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.614359   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.614402   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.616877   26835 out.go:177] * Found network options:
	I0915 07:03:49.618062   26835 out.go:177]   - NO_PROXY=192.168.39.54,192.168.39.222
	W0915 07:03:49.619384   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:03:49.619416   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:03:49.619430   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.619926   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.620102   26835 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:03:49.620204   26835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:03:49.620247   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	W0915 07:03:49.620273   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:03:49.620299   26835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:03:49.620353   26835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:03:49.620374   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:03:49.623186   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623399   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623598   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.623623   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623783   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:49.623807   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:49.623810   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.624008   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.624019   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:03:49.624156   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.624207   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:03:49.624299   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.624383   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:03:49.624483   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:03:49.859888   26835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:03:49.866143   26835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:03:49.866216   26835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:03:49.883052   26835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 07:03:49.883077   26835 start.go:495] detecting cgroup driver to use...
	I0915 07:03:49.883141   26835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:03:49.899365   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:03:49.913326   26835 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:03:49.913406   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:03:49.926614   26835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:03:49.940074   26835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:03:50.051904   26835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:03:50.218228   26835 docker.go:233] disabling docker service ...
	I0915 07:03:50.218298   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:03:50.233609   26835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:03:50.246933   26835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:03:50.363927   26835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:03:50.474597   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:03:50.488268   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:03:50.509249   26835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:03:50.509323   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.519560   26835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:03:50.519629   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.529900   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.540024   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.550170   26835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:03:50.560551   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.570254   26835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.587171   26835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:03:50.597852   26835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:03:50.607246   26835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 07:03:50.607294   26835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 07:03:50.620908   26835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:03:50.630690   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:03:50.746640   26835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:03:50.842040   26835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:03:50.842123   26835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:03:50.846896   26835 start.go:563] Will wait 60s for crictl version
	I0915 07:03:50.846947   26835 ssh_runner.go:195] Run: which crictl
	I0915 07:03:50.850982   26835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:03:50.891650   26835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:03:50.891739   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:03:50.920635   26835 ssh_runner.go:195] Run: crio --version
	I0915 07:03:50.951253   26835 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:03:50.952707   26835 out.go:177]   - env NO_PROXY=192.168.39.54
	I0915 07:03:50.953929   26835 out.go:177]   - env NO_PROXY=192.168.39.54,192.168.39.222
	I0915 07:03:50.955135   26835 main.go:141] libmachine: (ha-670527-m03) Calling .GetIP
	I0915 07:03:50.957617   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:50.957994   26835 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:03:50.958018   26835 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:03:50.958224   26835 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:03:50.962558   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:03:50.977306   26835 mustload.go:65] Loading cluster: ha-670527
	I0915 07:03:50.977564   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:03:50.977993   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:50.978043   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:50.993661   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0915 07:03:50.994126   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:50.994612   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:50.994634   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:50.994903   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:50.995067   26835 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:03:50.996695   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:03:50.997003   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:50.997045   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:51.011921   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0915 07:03:51.012422   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:51.012901   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:51.012917   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:51.013217   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:51.013376   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:03:51.013532   26835 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.4
	I0915 07:03:51.013544   26835 certs.go:194] generating shared ca certs ...
	I0915 07:03:51.013562   26835 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:03:51.013702   26835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:03:51.013756   26835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:03:51.013776   26835 certs.go:256] generating profile certs ...
	I0915 07:03:51.013897   26835 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:03:51.013928   26835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222
	I0915 07:03:51.013950   26835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.222 192.168.39.4 192.168.39.254]
	I0915 07:03:51.155977   26835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222 ...
	I0915 07:03:51.156004   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222: {Name:mk71e34c696b75e661b03e0c64f1d14a00e75c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:03:51.156167   26835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222 ...
	I0915 07:03:51.156178   26835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222: {Name:mk165e15c7f6cfc7c0d0b32169597c56d3e9f829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:03:51.156248   26835 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.ebe1a222 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:03:51.156378   26835 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.ebe1a222 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
	I0915 07:03:51.156511   26835 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:03:51.156527   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:03:51.156541   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:03:51.156554   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:03:51.156566   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:03:51.156578   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:03:51.156588   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:03:51.156600   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:03:51.177878   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:03:51.177964   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:03:51.178041   26835 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:03:51.178053   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:03:51.178075   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:03:51.178098   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:03:51.178119   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:03:51.178156   26835 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:03:51.178185   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.178205   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.178217   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.178245   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:03:51.180867   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:51.181258   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:03:51.181287   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:51.181436   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:03:51.181641   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:03:51.181782   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:03:51.181922   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:03:51.258165   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0915 07:03:51.263307   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0915 07:03:51.285004   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0915 07:03:51.290822   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0915 07:03:51.302586   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0915 07:03:51.307017   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0915 07:03:51.317901   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0915 07:03:51.322096   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0915 07:03:51.332670   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0915 07:03:51.336604   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0915 07:03:51.352869   26835 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0915 07:03:51.357061   26835 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0915 07:03:51.368097   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:03:51.395494   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:03:51.420698   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:03:51.445874   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:03:51.470906   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0915 07:03:51.496181   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:03:51.522200   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:03:51.547820   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:03:51.575355   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:03:51.601071   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:03:51.626137   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:03:51.650004   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0915 07:03:51.667278   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0915 07:03:51.685002   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0915 07:03:51.702974   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0915 07:03:51.720906   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0915 07:03:51.738527   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0915 07:03:51.754706   26835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0915 07:03:51.774061   26835 ssh_runner.go:195] Run: openssl version
	I0915 07:03:51.779874   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:03:51.790963   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.795362   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.795416   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:03:51.801181   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:03:51.812514   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:03:51.825177   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.829834   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.829889   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:03:51.836542   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 07:03:51.849297   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:03:51.862096   26835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.866913   26835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.866974   26835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:03:51.873520   26835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:03:51.886365   26835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:03:51.890725   26835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:03:51.890781   26835 kubeadm.go:934] updating node {m03 192.168.39.4 8443 v1.31.1 crio true true} ...
	I0915 07:03:51.890866   26835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:03:51.890893   26835 kube-vip.go:115] generating kube-vip config ...
	I0915 07:03:51.890934   26835 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:03:51.910815   26835 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:03:51.910884   26835 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0915 07:03:51.910938   26835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:03:51.922823   26835 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0915 07:03:51.922877   26835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0915 07:03:51.934450   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0915 07:03:51.934461   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0915 07:03:51.934483   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:03:51.934494   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:03:51.934523   26835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0915 07:03:51.934541   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:03:51.934549   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0915 07:03:51.934585   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0915 07:03:51.952258   26835 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:03:51.952314   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0915 07:03:51.952348   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0915 07:03:51.952354   26835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0915 07:03:51.952392   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0915 07:03:51.952416   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0915 07:03:51.983634   26835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0915 07:03:51.983679   26835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0915 07:03:52.820714   26835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0915 07:03:52.831204   26835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0915 07:03:52.849837   26835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:03:52.867416   26835 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:03:52.885297   26835 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:03:52.889682   26835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:03:52.905701   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:03:53.023843   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:03:53.041789   26835 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:03:53.042126   26835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:03:53.042174   26835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:03:53.057077   26835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0915 07:03:53.057609   26835 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:03:53.058160   26835 main.go:141] libmachine: Using API Version  1
	I0915 07:03:53.058185   26835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:03:53.058581   26835 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:03:53.058797   26835 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:03:53.058950   26835 start.go:317] joinCluster: &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:03:53.059106   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0915 07:03:53.059126   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:03:53.062011   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:53.062410   26835 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:03:53.062440   26835 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:03:53.062587   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:03:53.062754   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:03:53.062900   26835 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:03:53.063009   26835 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:03:53.220773   26835 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:03:53.220818   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9b0rgg.sy0fprvhhqv1kkrn --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m03 --control-plane --apiserver-advertise-address=192.168.39.4 --apiserver-bind-port=8443"
	I0915 07:04:16.954702   26835 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9b0rgg.sy0fprvhhqv1kkrn --discovery-token-ca-cert-hash sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-670527-m03 --control-plane --apiserver-advertise-address=192.168.39.4 --apiserver-bind-port=8443": (23.733859338s)
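	[editor's note] The join step mints a token on the primary with "kubeadm token create --print-join-command --ttl=0", then runs kubeadm join on m03 with --control-plane and the node's own advertise address; the log shows it completing in about 23.7s. A hedged sketch of how such a command line could be assembled; the helper name is hypothetical and the values in main are copied from the log:

	package main

	import "fmt"

	// controlPlaneJoinCmd assembles a kubeadm join invocation like the one in
	// the log. The token and discovery CA-cert hash would normally come from
	// the primary's "kubeadm token create --print-join-command" output.
	func controlPlaneJoinCmd(endpoint, token, caHash, nodeName, advertiseIP string) string {
		return fmt.Sprintf(
			"kubeadm join %s --token %s --discovery-token-ca-cert-hash %s "+
				"--ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock "+
				"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
			endpoint, token, caHash, nodeName, advertiseIP)
	}

	func main() {
		fmt.Println(controlPlaneJoinCmd(
			"control-plane.minikube.internal:8443",
			"9b0rgg.sy0fprvhhqv1kkrn",
			"sha256:d92a404bbd15cbe219a6bd2e9e46bab5749b094e87ff5f1e08f1e533d5260d2b",
			"ha-670527-m03",
			"192.168.39.4"))
	}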
	I0915 07:04:16.954740   26835 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0915 07:04:17.545275   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-670527-m03 minikube.k8s.io/updated_at=2024_09_15T07_04_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=ha-670527 minikube.k8s.io/primary=false
	I0915 07:04:17.679917   26835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-670527-m03 node-role.kubernetes.io/control-plane:NoSchedule-
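	[editor's note] The two kubectl calls above label the new node (minikube metadata plus primary=false) and remove the control-plane NoSchedule taint (the trailing "-" in the taint command). Minikube shells out to kubectl as shown; the following is an alternative client-go sketch of the same effect, applying just one of the labels from the log, with the kubeconfig path taken from the log:

	package main

	import (
		"context"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-670527-m03", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if node.Labels == nil {
			node.Labels = map[string]string{}
		}
		node.Labels["minikube.k8s.io/primary"] = "false" // one of the labels applied above
		var taints []corev1.Taint
		for _, t := range node.Spec.Taints {
			if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
				continue // equivalent of "kubectl taint ... NoSchedule-"
			}
			taints = append(taints, t)
		}
		node.Spec.Taints = taints
		if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}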
	I0915 07:04:17.811771   26835 start.go:319] duration metric: took 24.752815611s to joinCluster
	I0915 07:04:17.811839   26835 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:04:17.812342   26835 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:04:17.813412   26835 out.go:177] * Verifying Kubernetes components...
	I0915 07:04:17.814770   26835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:04:18.026111   26835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:04:18.056975   26835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:04:18.057305   26835 kapi.go:59] client config for ha-670527: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key", CAFile:"/home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0915 07:04:18.057388   26835 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0915 07:04:18.057628   26835 node_ready.go:35] waiting up to 6m0s for node "ha-670527-m03" to be "Ready" ...
	I0915 07:04:18.057709   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:18.057720   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:18.057731   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:18.057742   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:18.060730   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:18.558540   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:18.558561   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:18.558570   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:18.558575   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:18.562681   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:19.058745   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:19.058767   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:19.058779   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:19.058786   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:19.063064   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:19.557966   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:19.557986   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:19.557994   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:19.557998   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:19.561376   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:20.058793   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:20.058811   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:20.058818   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:20.058822   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:20.062366   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:20.063109   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:20.558275   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:20.558295   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:20.558303   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:20.558307   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:20.561661   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:21.058412   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:21.058434   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:21.058448   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:21.058455   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:21.061951   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:21.558573   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:21.558595   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:21.558606   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:21.558612   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:21.562273   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:22.058212   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:22.058244   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:22.058255   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:22.058261   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:22.062180   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:22.063257   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:22.558344   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:22.558367   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:22.558375   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:22.558378   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:22.562276   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:23.058414   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:23.058433   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:23.058446   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:23.058451   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:23.061901   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:23.557846   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:23.557871   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:23.557880   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:23.557885   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:23.561305   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:24.057956   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:24.057977   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:24.057988   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:24.057992   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:24.061821   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:24.558595   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:24.558613   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:24.558623   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:24.558627   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:24.561698   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:24.562359   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:25.058082   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:25.058101   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:25.058108   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:25.058113   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:25.061700   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:25.558247   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:25.558268   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:25.558274   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:25.558277   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:25.561355   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:26.058402   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:26.058429   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:26.058436   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:26.058440   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:26.062379   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:26.557962   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:26.557981   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:26.557989   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:26.557993   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:26.561149   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:27.058781   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:27.058804   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:27.058815   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:27.058822   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:27.062325   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:27.063170   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:27.558060   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:27.558084   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:27.558093   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:27.558102   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:27.562217   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:28.058215   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:28.058240   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:28.058253   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:28.058259   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:28.063049   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:28.558066   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:28.558089   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:28.558097   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:28.558102   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:28.561637   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:29.058380   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:29.058402   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:29.058411   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:29.058415   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:29.071200   26835 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0915 07:04:29.071654   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:29.557965   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:29.557986   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:29.557994   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:29.558000   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:29.561665   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:30.057997   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:30.058014   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:30.058022   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:30.058026   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:30.061599   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:30.558552   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:30.558573   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:30.558580   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:30.558583   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:30.561981   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:31.058748   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:31.058771   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:31.058779   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:31.058785   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:31.063276   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:31.557993   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:31.558019   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:31.558030   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:31.558036   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:31.562539   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:31.563535   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:32.058337   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:32.058358   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:32.058367   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:32.058371   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:32.061998   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:32.558690   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:32.558710   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:32.558717   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:32.558722   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:32.562446   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:33.058344   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:33.058370   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:33.058378   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:33.058382   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:33.061651   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:33.557983   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:33.558008   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:33.558018   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:33.558026   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:33.562087   26835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:04:34.057979   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:34.058001   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:34.058010   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:34.058016   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:34.061323   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:34.061914   26835 node_ready.go:53] node "ha-670527-m03" has status "Ready":"False"
	I0915 07:04:34.557991   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:34.558015   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:34.558026   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:34.558031   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:34.561284   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:35.058494   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:35.058512   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:35.058519   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:35.058522   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:35.061655   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:35.558484   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:35.558504   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:35.558519   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:35.558525   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:35.561562   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.058223   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:36.058243   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.058254   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.058259   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.061088   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.061775   26835 node_ready.go:49] node "ha-670527-m03" has status "Ready":"True"
	I0915 07:04:36.061792   26835 node_ready.go:38] duration metric: took 18.004148589s for node "ha-670527-m03" to be "Ready" ...
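	[editor's note] The repeated GETs above are the waiter polling /api/v1/nodes/ha-670527-m03 roughly every 500ms until the node's Ready condition flips to True, which here takes about 18s of the 6m budget. The following is not minikube's node_ready implementation, just a minimal client-go sketch of the same poll, using the kubeconfig path loaded earlier in the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True, the state
	// the log prints as `has status "Ready":"True"`.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19644-6166/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-670527-m03", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
		fmt.Println("timed out waiting for Ready")
	}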
	I0915 07:04:36.061800   26835 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:04:36.061888   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:36.061899   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.061905   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.061909   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.067789   26835 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:04:36.073680   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.073746   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4w6x7
	I0915 07:04:36.073754   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.073761   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.073764   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.076482   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.077149   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.077166   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.077176   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.077186   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.079923   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.080334   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.080348   26835 pod_ready.go:82] duration metric: took 6.647941ms for pod "coredns-7c65d6cfc9-4w6x7" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.080356   26835 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.080399   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lpj44
	I0915 07:04:36.080407   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.080413   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.080418   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.082754   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.083507   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.083522   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.083529   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.083533   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.085737   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.086257   26835 pod_ready.go:93] pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.086273   26835 pod_ready.go:82] duration metric: took 5.912191ms for pod "coredns-7c65d6cfc9-lpj44" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.086281   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.086331   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527
	I0915 07:04:36.086338   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.086345   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.086349   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.088849   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.089335   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.089346   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.089353   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.089359   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.091932   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.092398   26835 pod_ready.go:93] pod "etcd-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.092412   26835 pod_ready.go:82] duration metric: took 6.124711ms for pod "etcd-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.092421   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.092473   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m02
	I0915 07:04:36.092482   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.092492   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.092500   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.094908   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.095587   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:36.095603   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.095614   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.095622   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.098307   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.098726   26835 pod_ready.go:93] pod "etcd-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.098740   26835 pod_ready.go:82] duration metric: took 6.312184ms for pod "etcd-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.098749   26835 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.258989   26835 request.go:632] Waited for 160.18431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m03
	I0915 07:04:36.259053   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-670527-m03
	I0915 07:04:36.259061   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.259068   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.259072   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.263000   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.458977   26835 request.go:632] Waited for 195.220619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:36.459049   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:36.459055   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.459062   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.459065   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.462070   26835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:04:36.462736   26835 pod_ready.go:93] pod "etcd-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.462752   26835 pod_ready.go:82] duration metric: took 363.99652ms for pod "etcd-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
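	[editor's note] The "Waited ... due to client-side throttling" lines come from client-go's default rate limiter: the rest.Config dumped earlier shows QPS:0 and Burst:0, so the library falls back to its defaults (5 requests/s, burst 10), and the rapid pod/node GET pairs start queueing for ~160-200ms each. Raising the limits on the config is the usual way to avoid this; the values below are illustrative, not what minikube uses:

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19644-6166/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		// With QPS/Burst left at 0, client-go applies its defaults, which is
		// what produces the throttling waits in the log. These higher limits
		// are a sketch only.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
	}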
	I0915 07:04:36.462775   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.658952   26835 request.go:632] Waited for 196.114758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:04:36.659017   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527
	I0915 07:04:36.659025   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.659034   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.659049   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.662171   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.858552   26835 request.go:632] Waited for 195.468363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.858603   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:36.858608   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:36.858614   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:36.858618   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:36.861831   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:36.862334   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:36.862360   26835 pod_ready.go:82] duration metric: took 399.566105ms for pod "kube-apiserver-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:36.862372   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.058493   26835 request.go:632] Waited for 196.021944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:04:37.058545   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m02
	I0915 07:04:37.058550   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.058557   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.058561   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.061803   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.258993   26835 request.go:632] Waited for 196.205305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:37.259041   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:37.259046   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.259052   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.259056   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.262105   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.262674   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:37.262691   26835 pod_ready.go:82] duration metric: took 400.311953ms for pod "kube-apiserver-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.262700   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.459255   26835 request.go:632] Waited for 196.501925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m03
	I0915 07:04:37.459300   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-670527-m03
	I0915 07:04:37.459305   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.459316   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.459321   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.462842   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.659015   26835 request.go:632] Waited for 195.36074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:37.659089   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:37.659098   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.659110   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.659117   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.662639   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:37.663138   26835 pod_ready.go:93] pod "kube-apiserver-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:37.663160   26835 pod_ready.go:82] duration metric: took 400.452596ms for pod "kube-apiserver-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.663173   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:37.858296   26835 request.go:632] Waited for 195.060423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:04:37.858391   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527
	I0915 07:04:37.858401   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:37.858411   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:37.858419   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:37.861773   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.058751   26835 request.go:632] Waited for 196.320861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:38.058837   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:38.058849   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.058860   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.058868   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.062211   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.062919   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:38.062935   26835 pod_ready.go:82] duration metric: took 399.755157ms for pod "kube-controller-manager-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.062944   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.258261   26835 request.go:632] Waited for 195.259507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:04:38.258319   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m02
	I0915 07:04:38.258324   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.258332   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.258335   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.261550   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.458682   26835 request.go:632] Waited for 196.148029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:38.458747   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:38.458753   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.458760   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.458765   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.461968   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.462530   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:38.462553   26835 pod_ready.go:82] duration metric: took 399.602164ms for pod "kube-controller-manager-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.462566   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.658641   26835 request.go:632] Waited for 196.007932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m03
	I0915 07:04:38.658716   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-670527-m03
	I0915 07:04:38.658722   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.658730   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.658761   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.662366   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.858380   26835 request.go:632] Waited for 195.281768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:38.858432   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:38.858437   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:38.858444   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:38.858449   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:38.862305   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:38.863021   26835 pod_ready.go:93] pod "kube-controller-manager-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:38.863036   26835 pod_ready.go:82] duration metric: took 400.460329ms for pod "kube-controller-manager-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:38.863046   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.059255   26835 request.go:632] Waited for 196.150242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:04:39.059312   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-25xtk
	I0915 07:04:39.059318   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.059325   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.059329   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.062619   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.258834   26835 request.go:632] Waited for 195.358373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:39.258890   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:39.258897   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.258907   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.258912   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.262536   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.263146   26835 pod_ready.go:93] pod "kube-proxy-25xtk" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:39.263163   26835 pod_ready.go:82] duration metric: took 400.111553ms for pod "kube-proxy-25xtk" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.263172   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.458302   26835 request.go:632] Waited for 195.0497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:04:39.458353   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kt79t
	I0915 07:04:39.458358   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.458365   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.458367   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.461983   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.659280   26835 request.go:632] Waited for 196.352701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:39.659339   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:39.659344   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.659351   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.659355   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.662770   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:39.663296   26835 pod_ready.go:93] pod "kube-proxy-kt79t" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:39.663313   26835 pod_ready.go:82] duration metric: took 400.135176ms for pod "kube-proxy-kt79t" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.663322   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mbcxc" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:39.858495   26835 request.go:632] Waited for 195.117993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbcxc
	I0915 07:04:39.858570   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbcxc
	I0915 07:04:39.858578   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:39.858585   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:39.858589   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:39.862321   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.058290   26835 request.go:632] Waited for 195.193866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:40.058338   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:40.058345   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.058354   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.058362   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.061568   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.062156   26835 pod_ready.go:93] pod "kube-proxy-mbcxc" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:40.062178   26835 pod_ready.go:82] duration metric: took 398.847996ms for pod "kube-proxy-mbcxc" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.062190   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.259249   26835 request.go:632] Waited for 196.997886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:04:40.259318   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527
	I0915 07:04:40.259325   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.259334   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.259344   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.262824   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.458929   26835 request.go:632] Waited for 195.362507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:40.459002   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527
	I0915 07:04:40.459009   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.459022   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.459032   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.462065   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.462606   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:40.462628   26835 pod_ready.go:82] duration metric: took 400.429796ms for pod "kube-scheduler-ha-670527" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.462639   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.658400   26835 request.go:632] Waited for 195.699406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:04:40.658490   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m02
	I0915 07:04:40.658501   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.658512   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.658522   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.661704   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.858883   26835 request.go:632] Waited for 196.406536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:40.858936   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m02
	I0915 07:04:40.858941   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:40.858952   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:40.858957   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:40.862232   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:40.862827   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:40.862847   26835 pod_ready.go:82] duration metric: took 400.202103ms for pod "kube-scheduler-ha-670527-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:40.862857   26835 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:41.058719   26835 request.go:632] Waited for 195.785516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m03
	I0915 07:04:41.058786   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-670527-m03
	I0915 07:04:41.058796   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.058808   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.058818   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.062547   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:41.258758   26835 request.go:632] Waited for 195.355688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:41.258808   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-670527-m03
	I0915 07:04:41.258813   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.258820   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.258825   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.262414   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:41.263220   26835 pod_ready.go:93] pod "kube-scheduler-ha-670527-m03" in "kube-system" namespace has status "Ready":"True"
	I0915 07:04:41.263240   26835 pod_ready.go:82] duration metric: took 400.375522ms for pod "kube-scheduler-ha-670527-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:04:41.263254   26835 pod_ready.go:39] duration metric: took 5.201426682s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:04:41.263272   26835 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:04:41.263335   26835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:04:41.285896   26835 api_server.go:72] duration metric: took 23.474016374s to wait for apiserver process to appear ...
	I0915 07:04:41.285926   26835 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:04:41.285950   26835 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0915 07:04:41.293498   26835 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0915 07:04:41.293569   26835 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0915 07:04:41.293581   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.293591   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.293596   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.295108   26835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0915 07:04:41.295177   26835 api_server.go:141] control plane version: v1.31.1
	I0915 07:04:41.295192   26835 api_server.go:131] duration metric: took 9.260179ms to wait for apiserver health ...
	I0915 07:04:41.295199   26835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:04:41.458590   26835 request.go:632] Waited for 163.32786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.458650   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.458655   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.458661   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.458665   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.464692   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:04:41.471606   26835 system_pods.go:59] 24 kube-system pods found
	I0915 07:04:41.471631   26835 system_pods.go:61] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:04:41.471635   26835 system_pods.go:61] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:04:41.471639   26835 system_pods.go:61] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:04:41.471642   26835 system_pods.go:61] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:04:41.471646   26835 system_pods.go:61] "etcd-ha-670527-m03" [dfd469fc-8e59-49af-bc8e-6da438608405] Running
	I0915 07:04:41.471649   26835 system_pods.go:61] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:04:41.471652   26835 system_pods.go:61] "kindnet-fcgbj" [39fe5d8d-e647-4133-80ba-24e9b4781c8e] Running
	I0915 07:04:41.471657   26835 system_pods.go:61] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:04:41.471659   26835 system_pods.go:61] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:04:41.471662   26835 system_pods.go:61] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:04:41.471665   26835 system_pods.go:61] "kube-apiserver-ha-670527-m03" [e7ba2773-71e2-409f-82c7-c205f7126edd] Running
	I0915 07:04:41.471668   26835 system_pods.go:61] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:04:41.471671   26835 system_pods.go:61] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:04:41.471674   26835 system_pods.go:61] "kube-controller-manager-ha-670527-m03" [c260fc3a-bfcb-4457-9f92-6ddcd633d30d] Running
	I0915 07:04:41.471677   26835 system_pods.go:61] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:04:41.471680   26835 system_pods.go:61] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:04:41.471684   26835 system_pods.go:61] "kube-proxy-mbcxc" [bb5a9c97-bdc1-4346-b2cb-117e1e2d7fce] Running
	I0915 07:04:41.471689   26835 system_pods.go:61] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:04:41.471692   26835 system_pods.go:61] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:04:41.471695   26835 system_pods.go:61] "kube-scheduler-ha-670527-m03" [d6ccae33-5434-4de4-a1d9-447fe01e5c54] Running
	I0915 07:04:41.471700   26835 system_pods.go:61] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:04:41.471703   26835 system_pods.go:61] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:04:41.471706   26835 system_pods.go:61] "kube-vip-ha-670527-m03" [c1cfdeee-1f16-4bdc-96a7-81e5863a9146] Running
	I0915 07:04:41.471708   26835 system_pods.go:61] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:04:41.471713   26835 system_pods.go:74] duration metric: took 176.510038ms to wait for pod list to return data ...
	I0915 07:04:41.471723   26835 default_sa.go:34] waiting for default service account to be created ...
	I0915 07:04:41.658983   26835 request.go:632] Waited for 187.197567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:04:41.659035   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:04:41.659040   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.659047   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.659051   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.664931   26835 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:04:41.665064   26835 default_sa.go:45] found service account: "default"
	I0915 07:04:41.665081   26835 default_sa.go:55] duration metric: took 193.352918ms for default service account to be created ...
	I0915 07:04:41.665089   26835 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 07:04:41.858799   26835 request.go:632] Waited for 193.620407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.858852   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0915 07:04:41.858857   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:41.858865   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:41.858869   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:41.865398   26835 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:04:41.873062   26835 system_pods.go:86] 24 kube-system pods found
	I0915 07:04:41.873110   26835 system_pods.go:89] "coredns-7c65d6cfc9-4w6x7" [b61b0aa7-48e9-4746-b2e9-d205b96fe557] Running
	I0915 07:04:41.873128   26835 system_pods.go:89] "coredns-7c65d6cfc9-lpj44" [a4a8f34c-c73f-411b-9773-18e274a3987f] Running
	I0915 07:04:41.873135   26835 system_pods.go:89] "etcd-ha-670527" [d7fd260a-bb00-4f30-8e27-ae79ab568428] Running
	I0915 07:04:41.873141   26835 system_pods.go:89] "etcd-ha-670527-m02" [91839d6d-2280-4850-bc47-0de42a8bd3ee] Running
	I0915 07:04:41.873146   26835 system_pods.go:89] "etcd-ha-670527-m03" [dfd469fc-8e59-49af-bc8e-6da438608405] Running
	I0915 07:04:41.873151   26835 system_pods.go:89] "kindnet-6sqhd" [8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2] Running
	I0915 07:04:41.873157   26835 system_pods.go:89] "kindnet-fcgbj" [39fe5d8d-e647-4133-80ba-24e9b4781c8e] Running
	I0915 07:04:41.873165   26835 system_pods.go:89] "kindnet-mn54b" [c413e4cd-9033-4f8d-ac98-5a641b14fe78] Running
	I0915 07:04:41.873172   26835 system_pods.go:89] "kube-apiserver-ha-670527" [2da91baa-de79-4304-9256-45771efa0825] Running
	I0915 07:04:41.873180   26835 system_pods.go:89] "kube-apiserver-ha-670527-m02" [406bb0a9-8e75-41c9-8f88-10d10b8fb327] Running
	I0915 07:04:41.873187   26835 system_pods.go:89] "kube-apiserver-ha-670527-m03" [e7ba2773-71e2-409f-82c7-c205f7126edd] Running
	I0915 07:04:41.873200   26835 system_pods.go:89] "kube-controller-manager-ha-670527" [aa981100-fd20-40e8-8449-b4332efc086d] Running
	I0915 07:04:41.873208   26835 system_pods.go:89] "kube-controller-manager-ha-670527-m02" [0e833c15-24c8-4a35-8c4e-58fe1eaa6600] Running
	I0915 07:04:41.873215   26835 system_pods.go:89] "kube-controller-manager-ha-670527-m03" [c260fc3a-bfcb-4457-9f92-6ddcd633d30d] Running
	I0915 07:04:41.873223   26835 system_pods.go:89] "kube-proxy-25xtk" [c9955046-49ba-426d-9377-8d3e02fd3f37] Running
	I0915 07:04:41.873227   26835 system_pods.go:89] "kube-proxy-kt79t" [9ae503da-976f-4f63-9a70-c1899bb990e7] Running
	I0915 07:04:41.873236   26835 system_pods.go:89] "kube-proxy-mbcxc" [bb5a9c97-bdc1-4346-b2cb-117e1e2d7fce] Running
	I0915 07:04:41.873242   26835 system_pods.go:89] "kube-scheduler-ha-670527" [085277d2-c1ce-4a47-9b73-47961e3d13d9] Running
	I0915 07:04:41.873251   26835 system_pods.go:89] "kube-scheduler-ha-670527-m02" [a88ee5e5-13cb-4160-b654-0af177d55cd5] Running
	I0915 07:04:41.873256   26835 system_pods.go:89] "kube-scheduler-ha-670527-m03" [d6ccae33-5434-4de4-a1d9-447fe01e5c54] Running
	I0915 07:04:41.873264   26835 system_pods.go:89] "kube-vip-ha-670527" [3ad87a12-7eca-44cb-8b2f-df38f92d8e4d] Running
	I0915 07:04:41.873269   26835 system_pods.go:89] "kube-vip-ha-670527-m02" [c02df8e9-056b-4028-9af5-1c4b8e42e780] Running
	I0915 07:04:41.873274   26835 system_pods.go:89] "kube-vip-ha-670527-m03" [c1cfdeee-1f16-4bdc-96a7-81e5863a9146] Running
	I0915 07:04:41.873282   26835 system_pods.go:89] "storage-provisioner" [62afc380-282c-4392-9ff9-7531ab5e74d1] Running
	I0915 07:04:41.873291   26835 system_pods.go:126] duration metric: took 208.195329ms to wait for k8s-apps to be running ...
	I0915 07:04:41.873303   26835 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:04:41.873353   26835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:04:41.893596   26835 system_svc.go:56] duration metric: took 20.281709ms WaitForService to wait for kubelet
	I0915 07:04:41.893634   26835 kubeadm.go:582] duration metric: took 24.081760048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:04:41.893657   26835 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:04:42.058985   26835 request.go:632] Waited for 165.250049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0915 07:04:42.059043   26835 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0915 07:04:42.059060   26835 round_trippers.go:469] Request Headers:
	I0915 07:04:42.059067   26835 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:04:42.059073   26835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0915 07:04:42.062924   26835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:04:42.063813   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:04:42.063834   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:04:42.063846   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:04:42.063851   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:04:42.063858   26835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:04:42.063863   26835 node_conditions.go:123] node cpu capacity is 2
	I0915 07:04:42.063871   26835 node_conditions.go:105] duration metric: took 170.208899ms to run NodePressure ...
	I0915 07:04:42.063885   26835 start.go:241] waiting for startup goroutines ...
	I0915 07:04:42.063905   26835 start.go:255] writing updated cluster config ...
	I0915 07:04:42.064189   26835 ssh_runner.go:195] Run: rm -f paused
	I0915 07:04:42.117372   26835 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 07:04:42.119782   26835 out.go:177] * Done! kubectl is now configured to use "ha-670527" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.743790170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384167743769479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4e815ec-38ae-4975-bd44-6ba6bf463e29 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.744310117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2448272-828c-4410-a8fc-6087a4be16d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.744375247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2448272-828c-4410-a8fc-6087a4be16d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.744658755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2448272-828c-4410-a8fc-6087a4be16d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.787343284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19bdab5f-c8ed-4408-8045-58304d21fa6c name=/runtime.v1.RuntimeService/Version
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.787702283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19bdab5f-c8ed-4408-8045-58304d21fa6c name=/runtime.v1.RuntimeService/Version
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.789334558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d12ebcd7-9700-4f20-a708-3d88dc87ffef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.789735408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384167789714807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d12ebcd7-9700-4f20-a708-3d88dc87ffef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.795675536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b5fe73c-b14a-497b-ac66-1c45a33d0a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.795746009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b5fe73c-b14a-497b-ac66-1c45a33d0a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.795972053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b5fe73c-b14a-497b-ac66-1c45a33d0a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.835812867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19f1bf62-787c-4004-a776-96f6931c63dc name=/runtime.v1.RuntimeService/Version
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.835904910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19f1bf62-787c-4004-a776-96f6931c63dc name=/runtime.v1.RuntimeService/Version
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.844361036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f40ba54-bbf0-48a5-b948-7ebce3bb5f91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.844795776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384167844770343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f40ba54-bbf0-48a5-b948-7ebce3bb5f91 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.845592253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac39a3b2-dc49-4ed6-86c2-88a5ccb492ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.845650113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac39a3b2-dc49-4ed6-86c2-88a5ccb492ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.846185037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac39a3b2-dc49-4ed6-86c2-88a5ccb492ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.889406261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ba68be3-65e4-4748-8a0d-bbbaa227adf3 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.889476916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ba68be3-65e4-4748-8a0d-bbbaa227adf3 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.891547439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f747bdf7-db8e-4683-a605-1bf94124362f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.891960513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384167891936822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f747bdf7-db8e-4683-a605-1bf94124362f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.892471592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=019e267b-3a32-44ef-aeff-6c0692bcd4dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.892526734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=019e267b-3a32-44ef-aeff-6c0692bcd4dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:09:27 ha-670527 crio[665]: time="2024-09-15 07:09:27.892754421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726383887395668421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740121317148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726383740060564989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5,PodSandboxId:d18c8f0f6f2f1b805b69f6ec62bf6c54531bf7d357002cde43172c70985937b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726383739985517431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263837
27859378938,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726383727654078298,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071,PodSandboxId:ad627ce8b936b4fceceb3a24712834305cfebc12fb66451a407039033c7a5687,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726383719461240461,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95538eea8eb32a40ca4ee9e8976fc434,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726383716294916549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf,PodSandboxId:24baf0c9e05eeccd5ec56896a5c84a6bc3f29e1a9faa66abf955215c508b76a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726383716318590486,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6,PodSandboxId:aa06f2e231607ae07276d54d579c7f3306415de82bdc7bb612fbde5a1f7a7cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726383716275806788,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726383716223526552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=019e267b-3a32-44ef-aeff-6c0692bcd4dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1d6d31c8606ff       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   ee2d1970f1e78       busybox-7dff88458-rvbkj
	fde41666d8c29       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago       Running             coredns                   0                   f7b4d1299c815       coredns-7c65d6cfc9-lpj44
	489cc4a0fb63e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago       Running             coredns                   0                   6f3bebb3d80d8       coredns-7c65d6cfc9-4w6x7
	606b9d6854130       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   d18c8f0f6f2f1       storage-provisioner
	aa6d2372c6ae3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   843991b56a260       kindnet-6sqhd
	b75dfe3b6121c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   594c62a0375e6       kube-proxy-25xtk
	5733f96a0b004       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   ad627ce8b936b       kube-vip-ha-670527
	bcaf162e8fd08       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   24baf0c9e05ee       kube-controller-manager-ha-670527
	bbb55bff5eb6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   6e7b02c328479       etcd-ha-670527
	f3e8e75a70017       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   aa06f2e231607       kube-apiserver-ha-670527
	e3475f73ce55b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   58967292ecf37       kube-scheduler-ha-670527
	
	
	==> coredns [489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f] <==
	[INFO] 10.244.0.4:45235 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000089564s
	[INFO] 10.244.0.4:51521 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001979834s
	[INFO] 10.244.2.2:37125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203403s
	[INFO] 10.244.2.2:50161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165008s
	[INFO] 10.244.2.2:56879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01294413s
	[INFO] 10.244.2.2:45083 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125708s
	[INFO] 10.244.1.2:52633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197932s
	[INFO] 10.244.1.2:50573 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770125s
	[INFO] 10.244.1.2:35701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180854s
	[INFO] 10.244.1.2:41389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132037s
	[INFO] 10.244.1.2:58202 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183842s
	[INFO] 10.244.1.2:49817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109611s
	[INFO] 10.244.0.4:52793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113159s
	[INFO] 10.244.0.4:38656 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106719s
	[INFO] 10.244.0.4:38122 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061858s
	[INFO] 10.244.2.2:46127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114243s
	[INFO] 10.244.1.2:54602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101327s
	[INFO] 10.244.1.2:55582 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124623s
	[INFO] 10.244.0.4:55917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104871s
	[INFO] 10.244.2.2:41069 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001913s
	[INFO] 10.244.1.2:58958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140612s
	[INFO] 10.244.1.2:39608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015189s
	[INFO] 10.244.1.2:40627 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154411s
	[INFO] 10.244.0.4:53377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121746s
	[INFO] 10.244.0.4:52578 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089133s
	
	
	==> coredns [fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4] <==
	[INFO] 10.244.2.2:43257 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151941s
	[INFO] 10.244.2.2:33629 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003973916s
	[INFO] 10.244.2.2:33194 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170028s
	[INFO] 10.244.2.2:40376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000180655s
	[INFO] 10.244.1.2:52585 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001449553s
	[INFO] 10.244.1.2:53060 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208928s
	[INFO] 10.244.0.4:56755 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001727309s
	[INFO] 10.244.0.4:60825 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234694s
	[INFO] 10.244.0.4:58873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001046398s
	[INFO] 10.244.0.4:42322 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104256s
	[INFO] 10.244.0.4:34109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000038552s
	[INFO] 10.244.2.2:60809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124458s
	[INFO] 10.244.2.2:36825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093407s
	[INFO] 10.244.2.2:56100 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075616s
	[INFO] 10.244.1.2:47124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122782s
	[INFO] 10.244.1.2:55965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096943s
	[INFO] 10.244.0.4:34915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120044s
	[INFO] 10.244.0.4:43696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073334s
	[INFO] 10.244.0.4:59415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158827s
	[INFO] 10.244.2.2:35148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177137s
	[INFO] 10.244.2.2:58466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166358s
	[INFO] 10.244.2.2:60740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000304437s
	[INFO] 10.244.1.2:54984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149622s
	[INFO] 10.244.0.4:44476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075065s
	[INFO] 10.244.0.4:37204 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054807s
	
	
	==> describe nodes <==
	Name:               ha-670527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:09:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:05:05 +0000   Sun, 15 Sep 2024 07:02:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-670527
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4352c21da1154e49b4f2cd8223ef4f22
	  System UUID:                4352c21d-a115-4e49-b4f2-cd8223ef4f22
	  Boot ID:                    28f13bdf-c0fc-4804-9eaa-c62790060557
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rvbkj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 coredns-7c65d6cfc9-4w6x7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 coredns-7c65d6cfc9-lpj44             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 etcd-ha-670527                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m27s
	  kube-system                 kindnet-6sqhd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m21s
	  kube-system                 kube-apiserver-ha-670527             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-ha-670527    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-25xtk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-scheduler-ha-670527             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-vip-ha-670527                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m20s  kube-proxy       
	  Normal  Starting                 7m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m26s  kubelet          Node ha-670527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m26s  kubelet          Node ha-670527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m26s  kubelet          Node ha-670527 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m22s  node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal  NodeReady                7m9s   kubelet          Node ha-670527 status is now: NodeReady
	  Normal  RegisteredNode           6m25s  node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal  RegisteredNode           5m6s   node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	
	
	Name:               ha-670527-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:05:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 15 Sep 2024 07:04:56 +0000   Sun, 15 Sep 2024 07:06:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-670527-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 937badb420fd46bab8c9040c7d7b213d
	  System UUID:                937badb4-20fd-46ba-b8c9-040c7d7b213d
	  Boot ID:                    12bb372b-3155-48ac-9bc2-c620b0e7b549
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxwp9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 etcd-ha-670527-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-mn54b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-apiserver-ha-670527-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-670527-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-kt79t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-scheduler-ha-670527-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-670527-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m34s (x3 over 6m34s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x3 over 6m34s)  kubelet          Node ha-670527-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x3 over 6m34s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m32s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeReady                6m11s                  kubelet          Node ha-670527-m02 status is now: NodeReady
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeNotReady             2m47s                  node-controller  Node ha-670527-m02 status is now: NodeNotReady
	
	
	Name:               ha-670527-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_04_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:04:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:09:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:05:14 +0000   Sun, 15 Sep 2024 07:04:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-670527-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16b4217ec868437981a046051de1bf49
	  System UUID:                16b4217e-c868-4379-81a0-46051de1bf49
	  Boot ID:                    8cbc44d2-ec4f-4f77-b000-fd28fe127c0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4cgxn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 etcd-ha-670527-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-fcgbj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m15s
	  kube-system                 kube-apiserver-ha-670527-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-670527-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-mbcxc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-ha-670527-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-670527-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m15s (x8 over 5m15s)  kubelet          Node ha-670527-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s (x8 over 5m15s)  kubelet          Node ha-670527-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s (x7 over 5m15s)  kubelet          Node ha-670527-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	
	
	Name:               ha-670527-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_05_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:05:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:09:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:05:53 +0000   Sun, 15 Sep 2024 07:05:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-670527-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24d136e447f34c399b15050eaf7b094c
	  System UUID:                24d136e4-47f3-4c39-9b15-050eaf7b094c
	  Boot ID:                    3d95536b-6e73-40d6-9bd4-2fc71b1a73bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4l8cf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-proxy-fq2lt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m5s (x2 over 4m6s)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x2 over 4m6s)  kubelet          Node ha-670527-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x2 over 4m6s)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal  NodeReady                3m45s                kubelet          Node ha-670527-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep15 07:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050166] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041549] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.806616] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.438146] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.580292] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.438155] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.055236] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053736] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.163785] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.149779] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293443] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.937975] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.762754] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.062847] kauditd_printk_skb: 158 callbacks suppressed
	[Sep15 07:02] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.107092] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.313948] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.323190] kauditd_printk_skb: 38 callbacks suppressed
	[Sep15 07:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b] <==
	{"level":"warn","ts":"2024-09-15T07:09:28.190342Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.194669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.205287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.210652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.213271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.221429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.226938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.231468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.242397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.250275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.257728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.261835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.264877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.265869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.272024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.278857Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.287334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.299436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.308413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.319403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.333414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.349390Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.365447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.388652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:09:28.393843Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:09:28 up 8 min,  0 users,  load average: 0.45, 0.26, 0.14
	Linux ha-670527 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230] <==
	I0915 07:08:49.128671       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:08:59.124320       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:08:59.124376       1 main.go:299] handling current node
	I0915 07:08:59.124395       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:08:59.124402       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:08:59.124576       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:08:59.124610       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:08:59.124696       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:08:59.124705       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:09:09.120116       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:09:09.120287       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:09:09.120446       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:09:09.120472       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:09:09.120533       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:09:09.120558       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:09:09.120614       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:09:09.120633       1 main.go:299] handling current node
	I0915 07:09:19.128338       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:09:19.128443       1 main.go:299] handling current node
	I0915 07:09:19.128471       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:09:19.128489       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:09:19.128639       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:09:19.128673       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:09:19.128730       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:09:19.128747       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6] <==
	I0915 07:02:01.340494       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 07:02:02.506665       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 07:02:02.519606       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0915 07:02:02.541694       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 07:02:06.741032       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0915 07:02:07.100027       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0915 07:04:49.287010       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51456: use of closed network connection
	E0915 07:04:49.478785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51488: use of closed network connection
	E0915 07:04:49.665458       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51508: use of closed network connection
	E0915 07:04:49.869558       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51518: use of closed network connection
	E0915 07:04:50.058020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51528: use of closed network connection
	E0915 07:04:50.244603       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51540: use of closed network connection
	E0915 07:04:50.418408       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51552: use of closed network connection
	E0915 07:04:50.596359       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51574: use of closed network connection
	E0915 07:04:50.783116       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51590: use of closed network connection
	E0915 07:04:51.068899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51614: use of closed network connection
	E0915 07:04:51.440856       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51656: use of closed network connection
	E0915 07:04:51.623756       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51666: use of closed network connection
	E0915 07:04:51.810349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51690: use of closed network connection
	E0915 07:04:52.035620       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51714: use of closed network connection
	E0915 07:05:23.709346       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0915 07:05:23.711222       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0915 07:05:23.712625       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0915 07:05:23.713864       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0915 07:05:23.715392       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.748303ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-670527-m04" result=null
	
	
	==> kube-controller-manager [bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf] <==
	I0915 07:05:23.158787       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-670527-m04" podCIDRs=["10.244.3.0/24"]
	I0915 07:05:23.158856       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:23.158884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:23.167230       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:24.122231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:24.159118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:24.552959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:26.307659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:26.308934       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-670527-m04"
	I0915 07:05:26.365352       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:27.733420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:27.844709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:33.193577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:43.135820       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:43.136569       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-670527-m04"
	I0915 07:05:43.155183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:43.653863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:05:53.513402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:06:41.342481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:06:41.342958       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-670527-m04"
	I0915 07:06:41.362298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:06:41.510345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.028922ms"
	I0915 07:06:41.510751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="127.963µs"
	I0915 07:06:42.823669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:06:46.588018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	
	
	==> kube-proxy [b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:02:08.079904       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:02:08.096851       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	E0915 07:02:08.096998       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:02:08.138602       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:02:08.138741       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:02:08.138784       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:02:08.143584       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:02:08.144421       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:02:08.144550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:02:08.147853       1 config.go:199] "Starting service config controller"
	I0915 07:02:08.148197       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:02:08.148411       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:02:08.148448       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:02:08.149835       1 config.go:328] "Starting node config controller"
	I0915 07:02:08.152553       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:02:08.249198       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:02:08.249264       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:02:08.255066       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d] <==
	W0915 07:02:00.534536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 07:02:00.534589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.597420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 07:02:00.597899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.604546       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 07:02:00.604596       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 07:02:00.622867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 07:02:00.622918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.665454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 07:02:00.665506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.745175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 07:02:00.745309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.757869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 07:02:00.757923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 07:02:00.771782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 07:02:00.771947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0915 07:02:03.481353       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:04:42.984344       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:04:42.984530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5fc959e1-a77e-415a-bbea-3dd4303e82d9(default/busybox-7dff88458-gxwp9) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gxwp9"
	E0915 07:04:42.984580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" pod="default/busybox-7dff88458-gxwp9"
	I0915 07:04:42.984652       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:05:23.207787       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	E0915 07:05:23.207903       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 50b6a6aa-70b7-41b5-9554-5fef223d25a4(kube-system/kube-proxy-fq2lt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fq2lt"
	E0915 07:05:23.207927       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" pod="kube-system/kube-proxy-fq2lt"
	I0915 07:05:23.207964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	
	
	==> kubelet <==
	Sep 15 07:08:02 ha-670527 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:08:02 ha-670527 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:08:02 ha-670527 kubelet[1303]: E0915 07:08:02.631878    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384082631207727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:02 ha-670527 kubelet[1303]: E0915 07:08:02.631927    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384082631207727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:12 ha-670527 kubelet[1303]: E0915 07:08:12.632966    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384092632722443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:12 ha-670527 kubelet[1303]: E0915 07:08:12.633008    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384092632722443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:22 ha-670527 kubelet[1303]: E0915 07:08:22.635672    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384102634358572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:22 ha-670527 kubelet[1303]: E0915 07:08:22.636618    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384102634358572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:32 ha-670527 kubelet[1303]: E0915 07:08:32.638675    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384112638248246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:32 ha-670527 kubelet[1303]: E0915 07:08:32.638709    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384112638248246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:42 ha-670527 kubelet[1303]: E0915 07:08:42.640265    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384122639978935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:42 ha-670527 kubelet[1303]: E0915 07:08:42.640305    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384122639978935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:52 ha-670527 kubelet[1303]: E0915 07:08:52.642720    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384132641764331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:08:52 ha-670527 kubelet[1303]: E0915 07:08:52.642782    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384132641764331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:09:02 ha-670527 kubelet[1303]: E0915 07:09:02.488666    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:09:02 ha-670527 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:09:02 ha-670527 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:09:02 ha-670527 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:09:02 ha-670527 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:09:02 ha-670527 kubelet[1303]: E0915 07:09:02.649648    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384142649112463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:09:02 ha-670527 kubelet[1303]: E0915 07:09:02.649700    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384142649112463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:09:12 ha-670527 kubelet[1303]: E0915 07:09:12.651996    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384152651640987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:09:12 ha-670527 kubelet[1303]: E0915 07:09:12.652333    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384152651640987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:09:22 ha-670527 kubelet[1303]: E0915 07:09:22.659519    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384162655340936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:09:22 ha-670527 kubelet[1303]: E0915 07:09:22.659567    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384162655340936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-670527 -n ha-670527
helpers_test.go:261: (dbg) Run:  kubectl --context ha-670527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.07s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (358.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-670527 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-670527 -v=7 --alsologtostderr
E0915 07:11:02.684454   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:11:30.386035   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-670527 -v=7 --alsologtostderr: exit status 82 (2m1.881210015s)

                                                
                                                
-- stdout --
	* Stopping node "ha-670527-m04"  ...
	* Stopping node "ha-670527-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:09:29.824096   32615 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:09:29.824379   32615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:29.824388   32615 out.go:358] Setting ErrFile to fd 2...
	I0915 07:09:29.824395   32615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:29.824637   32615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:09:29.824937   32615 out.go:352] Setting JSON to false
	I0915 07:09:29.825050   32615 mustload.go:65] Loading cluster: ha-670527
	I0915 07:09:29.825439   32615 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:09:29.825549   32615 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:09:29.825741   32615 mustload.go:65] Loading cluster: ha-670527
	I0915 07:09:29.825925   32615 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:09:29.825961   32615 stop.go:39] StopHost: ha-670527-m04
	I0915 07:09:29.826324   32615 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:29.826367   32615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:29.842857   32615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35777
	I0915 07:09:29.843379   32615 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:29.844054   32615 main.go:141] libmachine: Using API Version  1
	I0915 07:09:29.844086   32615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:29.844468   32615 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:29.847171   32615 out.go:177] * Stopping node "ha-670527-m04"  ...
	I0915 07:09:29.848930   32615 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0915 07:09:29.848972   32615 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:09:29.849212   32615 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0915 07:09:29.849234   32615 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:09:29.852308   32615 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:29.852787   32615 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:05:07 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:09:29.852815   32615 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:09:29.852930   32615 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:09:29.853083   32615 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:09:29.853260   32615 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:09:29.853404   32615 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:09:29.940671   32615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0915 07:09:29.994674   32615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0915 07:09:30.049013   32615 main.go:141] libmachine: Stopping "ha-670527-m04"...
	I0915 07:09:30.049039   32615 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:09:30.050600   32615 main.go:141] libmachine: (ha-670527-m04) Calling .Stop
	I0915 07:09:30.053781   32615 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 0/120
	I0915 07:09:31.242064   32615 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:09:31.243331   32615 main.go:141] libmachine: Machine "ha-670527-m04" was stopped.
	I0915 07:09:31.243348   32615 stop.go:75] duration metric: took 1.394422472s to stop
	I0915 07:09:31.243366   32615 stop.go:39] StopHost: ha-670527-m03
	I0915 07:09:31.243660   32615 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:09:31.243709   32615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:09:31.258696   32615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0915 07:09:31.259106   32615 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:09:31.259606   32615 main.go:141] libmachine: Using API Version  1
	I0915 07:09:31.259629   32615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:09:31.259929   32615 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:09:31.262081   32615 out.go:177] * Stopping node "ha-670527-m03"  ...
	I0915 07:09:31.263322   32615 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0915 07:09:31.263346   32615 main.go:141] libmachine: (ha-670527-m03) Calling .DriverName
	I0915 07:09:31.263543   32615 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0915 07:09:31.263563   32615 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHHostname
	I0915 07:09:31.266606   32615 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:31.266996   32615 main.go:141] libmachine: (ha-670527-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8f:a3", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:03:36 +0000 UTC Type:0 Mac:52:54:00:b4:8f:a3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-670527-m03 Clientid:01:52:54:00:b4:8f:a3}
	I0915 07:09:31.267026   32615 main.go:141] libmachine: (ha-670527-m03) DBG | domain ha-670527-m03 has defined IP address 192.168.39.4 and MAC address 52:54:00:b4:8f:a3 in network mk-ha-670527
	I0915 07:09:31.267145   32615 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHPort
	I0915 07:09:31.267288   32615 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHKeyPath
	I0915 07:09:31.267449   32615 main.go:141] libmachine: (ha-670527-m03) Calling .GetSSHUsername
	I0915 07:09:31.267581   32615 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m03/id_rsa Username:docker}
	I0915 07:09:31.353183   32615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0915 07:09:31.407396   32615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0915 07:09:31.462663   32615 main.go:141] libmachine: Stopping "ha-670527-m03"...
	I0915 07:09:31.462687   32615 main.go:141] libmachine: (ha-670527-m03) Calling .GetState
	I0915 07:09:31.464204   32615 main.go:141] libmachine: (ha-670527-m03) Calling .Stop
	I0915 07:09:31.467456   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 0/120
	I0915 07:09:32.469139   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 1/120
	I0915 07:09:33.470483   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 2/120
	I0915 07:09:34.471854   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 3/120
	I0915 07:09:35.473183   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 4/120
	I0915 07:09:36.475043   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 5/120
	I0915 07:09:37.476368   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 6/120
	I0915 07:09:38.477884   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 7/120
	I0915 07:09:39.479325   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 8/120
	I0915 07:09:40.480681   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 9/120
	I0915 07:09:41.482828   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 10/120
	I0915 07:09:42.484433   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 11/120
	I0915 07:09:43.485965   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 12/120
	I0915 07:09:44.487642   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 13/120
	I0915 07:09:45.489038   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 14/120
	I0915 07:09:46.490965   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 15/120
	I0915 07:09:47.493025   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 16/120
	I0915 07:09:48.494563   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 17/120
	I0915 07:09:49.496066   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 18/120
	I0915 07:09:50.497840   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 19/120
	I0915 07:09:51.500144   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 20/120
	I0915 07:09:52.501334   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 21/120
	I0915 07:09:53.502824   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 22/120
	I0915 07:09:54.504278   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 23/120
	I0915 07:09:55.505959   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 24/120
	I0915 07:09:56.507822   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 25/120
	I0915 07:09:57.509639   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 26/120
	I0915 07:09:58.511267   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 27/120
	I0915 07:09:59.513778   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 28/120
	I0915 07:10:00.515274   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 29/120
	I0915 07:10:01.517230   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 30/120
	I0915 07:10:02.518743   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 31/120
	I0915 07:10:03.520245   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 32/120
	I0915 07:10:04.521484   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 33/120
	I0915 07:10:05.522845   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 34/120
	I0915 07:10:06.525189   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 35/120
	I0915 07:10:07.526599   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 36/120
	I0915 07:10:08.527942   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 37/120
	I0915 07:10:09.529278   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 38/120
	I0915 07:10:10.530563   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 39/120
	I0915 07:10:11.532348   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 40/120
	I0915 07:10:12.533576   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 41/120
	I0915 07:10:13.535055   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 42/120
	I0915 07:10:14.536448   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 43/120
	I0915 07:10:15.537919   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 44/120
	I0915 07:10:16.539326   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 45/120
	I0915 07:10:17.540643   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 46/120
	I0915 07:10:18.542007   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 47/120
	I0915 07:10:19.544228   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 48/120
	I0915 07:10:20.545684   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 49/120
	I0915 07:10:21.547549   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 50/120
	I0915 07:10:22.548972   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 51/120
	I0915 07:10:23.550428   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 52/120
	I0915 07:10:24.552073   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 53/120
	I0915 07:10:25.553390   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 54/120
	I0915 07:10:26.555306   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 55/120
	I0915 07:10:27.556760   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 56/120
	I0915 07:10:28.558250   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 57/120
	I0915 07:10:29.559672   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 58/120
	I0915 07:10:30.561333   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 59/120
	I0915 07:10:31.562932   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 60/120
	I0915 07:10:32.564294   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 61/120
	I0915 07:10:33.565994   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 62/120
	I0915 07:10:34.568338   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 63/120
	I0915 07:10:35.569822   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 64/120
	I0915 07:10:36.571536   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 65/120
	I0915 07:10:37.572856   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 66/120
	I0915 07:10:38.574161   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 67/120
	I0915 07:10:39.576166   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 68/120
	I0915 07:10:40.577519   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 69/120
	I0915 07:10:41.579309   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 70/120
	I0915 07:10:42.580502   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 71/120
	I0915 07:10:43.581885   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 72/120
	I0915 07:10:44.583109   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 73/120
	I0915 07:10:45.584432   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 74/120
	I0915 07:10:46.586231   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 75/120
	I0915 07:10:47.588579   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 76/120
	I0915 07:10:48.589968   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 77/120
	I0915 07:10:49.591370   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 78/120
	I0915 07:10:50.592825   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 79/120
	I0915 07:10:51.594472   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 80/120
	I0915 07:10:52.595719   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 81/120
	I0915 07:10:53.597166   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 82/120
	I0915 07:10:54.598484   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 83/120
	I0915 07:10:55.599897   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 84/120
	I0915 07:10:56.601762   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 85/120
	I0915 07:10:57.603110   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 86/120
	I0915 07:10:58.604215   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 87/120
	I0915 07:10:59.605681   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 88/120
	I0915 07:11:00.606990   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 89/120
	I0915 07:11:01.608689   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 90/120
	I0915 07:11:02.609984   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 91/120
	I0915 07:11:03.611534   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 92/120
	I0915 07:11:04.612871   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 93/120
	I0915 07:11:05.614215   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 94/120
	I0915 07:11:06.616220   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 95/120
	I0915 07:11:07.617515   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 96/120
	I0915 07:11:08.618835   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 97/120
	I0915 07:11:09.620479   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 98/120
	I0915 07:11:10.621952   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 99/120
	I0915 07:11:11.623676   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 100/120
	I0915 07:11:12.625074   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 101/120
	I0915 07:11:13.626999   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 102/120
	I0915 07:11:14.628256   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 103/120
	I0915 07:11:15.629554   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 104/120
	I0915 07:11:16.631121   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 105/120
	I0915 07:11:17.632501   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 106/120
	I0915 07:11:18.633694   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 107/120
	I0915 07:11:19.635810   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 108/120
	I0915 07:11:20.638030   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 109/120
	I0915 07:11:21.640413   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 110/120
	I0915 07:11:22.641885   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 111/120
	I0915 07:11:23.643197   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 112/120
	I0915 07:11:24.644538   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 113/120
	I0915 07:11:25.645926   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 114/120
	I0915 07:11:26.647524   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 115/120
	I0915 07:11:27.648907   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 116/120
	I0915 07:11:28.650672   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 117/120
	I0915 07:11:29.652222   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 118/120
	I0915 07:11:30.653574   32615 main.go:141] libmachine: (ha-670527-m03) Waiting for machine to stop 119/120
	I0915 07:11:31.654987   32615 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0915 07:11:31.655043   32615 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0915 07:11:31.657257   32615 out.go:201] 
	W0915 07:11:31.658662   32615 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0915 07:11:31.658681   32615 out.go:270] * 
	* 
	W0915 07:11:31.660992   32615 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 07:11:31.662340   32615 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-670527 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-670527 --wait=true -v=7 --alsologtostderr
E0915 07:12:56.199040   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-670527 --wait=true -v=7 --alsologtostderr: (3m53.667895252s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-670527
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-670527 -n ha-670527
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-670527 logs -n 25: (2.104261161s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m02:/home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m04:/home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m04 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp testdata/cp-test.txt                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527:/home/docker/cp-test_ha-670527-m04_ha-670527.txt                       |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527 sudo cat                                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527.txt                                 |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m02:/home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m03:/home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m03 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-670527 node stop m02 -v=7                                                     | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-670527 node start m02 -v=7                                                    | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-670527 -v=7                                                           | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-670527 -v=7                                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-670527 --wait=true -v=7                                                    | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:11 UTC | 15 Sep 24 07:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-670527                                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:15 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:11:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:11:31.706810   33084 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:11:31.706924   33084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:11:31.706935   33084 out.go:358] Setting ErrFile to fd 2...
	I0915 07:11:31.706939   33084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:11:31.707158   33084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:11:31.707748   33084 out.go:352] Setting JSON to false
	I0915 07:11:31.708686   33084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3238,"bootTime":1726381054,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:11:31.708786   33084 start.go:139] virtualization: kvm guest
	I0915 07:11:31.711320   33084 out.go:177] * [ha-670527] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:11:31.712733   33084 notify.go:220] Checking for updates...
	I0915 07:11:31.712750   33084 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:11:31.714227   33084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:11:31.715816   33084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:11:31.717238   33084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:11:31.718412   33084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:11:31.719906   33084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:11:31.721887   33084 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:11:31.722016   33084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:11:31.722518   33084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:11:31.722563   33084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:11:31.739353   33084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0915 07:11:31.739829   33084 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:11:31.740400   33084 main.go:141] libmachine: Using API Version  1
	I0915 07:11:31.740424   33084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:11:31.740742   33084 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:11:31.740918   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:11:31.777801   33084 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:11:31.779445   33084 start.go:297] selected driver: kvm2
	I0915 07:11:31.779458   33084 start.go:901] validating driver "kvm2" against &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:11:31.779612   33084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:11:31.779918   33084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:11:31.780012   33084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:11:31.794937   33084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:11:31.795732   33084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:11:31.795772   33084 cni.go:84] Creating CNI manager for ""
	I0915 07:11:31.795832   33084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0915 07:11:31.795907   33084 start.go:340] cluster config:
	{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:11:31.796042   33084 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:11:31.797881   33084 out.go:177] * Starting "ha-670527" primary control-plane node in "ha-670527" cluster
	I0915 07:11:31.799222   33084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:11:31.799264   33084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:11:31.799282   33084 cache.go:56] Caching tarball of preloaded images
	I0915 07:11:31.799391   33084 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:11:31.799406   33084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:11:31.799512   33084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:11:31.799729   33084 start.go:360] acquireMachinesLock for ha-670527: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:11:31.799767   33084 start.go:364] duration metric: took 22.439µs to acquireMachinesLock for "ha-670527"
	I0915 07:11:31.799789   33084 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:11:31.799799   33084 fix.go:54] fixHost starting: 
	I0915 07:11:31.800037   33084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:11:31.800107   33084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:11:31.814212   33084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I0915 07:11:31.814617   33084 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:11:31.815084   33084 main.go:141] libmachine: Using API Version  1
	I0915 07:11:31.815107   33084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:11:31.815447   33084 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:11:31.815674   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:11:31.815843   33084 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:11:31.817793   33084 fix.go:112] recreateIfNeeded on ha-670527: state=Running err=<nil>
	W0915 07:11:31.817847   33084 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:11:31.820089   33084 out.go:177] * Updating the running kvm2 "ha-670527" VM ...
	I0915 07:11:31.821448   33084 machine.go:93] provisionDockerMachine start ...
	I0915 07:11:31.821477   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:11:31.821719   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:31.824223   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.824627   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:31.824646   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.824824   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:31.824992   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.825121   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.825266   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:31.825410   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:31.825591   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:31.825600   33084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:11:31.939524   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527
	
	I0915 07:11:31.939551   33084 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:11:31.939768   33084 buildroot.go:166] provisioning hostname "ha-670527"
	I0915 07:11:31.939796   33084 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:11:31.939993   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:31.942790   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.943157   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:31.943192   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.943252   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:31.943413   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.943547   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.943713   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:31.943859   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:31.944040   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:31.944055   33084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527 && echo "ha-670527" | sudo tee /etc/hostname
	I0915 07:11:32.070709   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527
	
	I0915 07:11:32.070736   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.073646   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.074092   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.074127   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.074317   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:32.074484   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.074618   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.074709   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:32.074884   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:32.075093   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:32.075109   33084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:11:32.186624   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:11:32.186653   33084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:11:32.186683   33084 buildroot.go:174] setting up certificates
	I0915 07:11:32.186695   33084 provision.go:84] configureAuth start
	I0915 07:11:32.186712   33084 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:11:32.186977   33084 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:11:32.189932   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.190348   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.190368   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.190491   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.192688   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.193176   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.193221   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.193360   33084 provision.go:143] copyHostCerts
	I0915 07:11:32.193391   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:11:32.193444   33084 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:11:32.193455   33084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:11:32.193534   33084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:11:32.193652   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:11:32.193676   33084 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:11:32.193683   33084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:11:32.193727   33084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:11:32.193802   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:11:32.193842   33084 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:11:32.193849   33084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:11:32.193886   33084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:11:32.193961   33084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527 san=[127.0.0.1 192.168.39.54 ha-670527 localhost minikube]
	I0915 07:11:32.267105   33084 provision.go:177] copyRemoteCerts
	I0915 07:11:32.267162   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:11:32.267197   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.269655   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.269973   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.269993   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.270173   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:32.270357   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.270506   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:32.270623   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:11:32.357380   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:11:32.357445   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0915 07:11:32.384500   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:11:32.384581   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 07:11:32.412234   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:11:32.412296   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:11:32.438757   33084 provision.go:87] duration metric: took 252.046876ms to configureAuth
	I0915 07:11:32.438800   33084 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:11:32.439035   33084 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:11:32.439120   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.441659   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.442060   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.442096   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.442236   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:32.442410   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.442595   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.442767   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:32.442919   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:32.443083   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:32.443099   33084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:13:03.392164   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:13:03.392192   33084 machine.go:96] duration metric: took 1m31.570726721s to provisionDockerMachine
	I0915 07:13:03.392206   33084 start.go:293] postStartSetup for "ha-670527" (driver="kvm2")
	I0915 07:13:03.392221   33084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:13:03.392239   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.392512   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:13:03.392541   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.395645   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.396127   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.396152   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.396268   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.396453   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.396590   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.396740   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:13:03.483366   33084 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:13:03.487615   33084 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:13:03.487635   33084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:13:03.487697   33084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:13:03.487764   33084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:13:03.487773   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:13:03.487850   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:13:03.498784   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:13:03.525279   33084 start.go:296] duration metric: took 133.058231ms for postStartSetup
	I0915 07:13:03.525317   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.525584   33084 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0915 07:13:03.525611   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.528215   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.528598   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.528623   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.528792   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.528953   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.529090   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.529184   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	W0915 07:13:03.612230   33084 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0915 07:13:03.612259   33084 fix.go:56] duration metric: took 1m31.812453347s for fixHost
	I0915 07:13:03.612279   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.614686   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.615032   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.615056   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.615142   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.615320   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.615467   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.615579   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.615723   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:03.615942   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:13:03.615957   33084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:13:03.722898   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726384383.689004182
	
	I0915 07:13:03.722921   33084 fix.go:216] guest clock: 1726384383.689004182
	I0915 07:13:03.722928   33084 fix.go:229] Guest: 2024-09-15 07:13:03.689004182 +0000 UTC Remote: 2024-09-15 07:13:03.612265191 +0000 UTC m=+91.940511290 (delta=76.738991ms)
	I0915 07:13:03.722946   33084 fix.go:200] guest clock delta is within tolerance: 76.738991ms
	I0915 07:13:03.722950   33084 start.go:83] releasing machines lock for "ha-670527", held for 1m31.923174723s
	I0915 07:13:03.722966   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.723206   33084 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:13:03.725749   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.726120   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.726146   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.726295   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.726865   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.727002   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.727116   33084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:13:03.727174   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.727203   33084 ssh_runner.go:195] Run: cat /version.json
	I0915 07:13:03.727222   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.729720   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.729925   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.730114   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.730136   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.730271   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.730408   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.730412   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.730428   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.730571   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.730600   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.730729   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.730725   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:13:03.730881   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.731004   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:13:03.811066   33084 ssh_runner.go:195] Run: systemctl --version
	I0915 07:13:03.836783   33084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:13:04.000559   33084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:13:04.006800   33084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:13:04.006866   33084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:13:04.016075   33084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:13:04.016097   33084 start.go:495] detecting cgroup driver to use...
	I0915 07:13:04.016161   33084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:13:04.032919   33084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:13:04.046899   33084 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:13:04.046986   33084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:13:04.060918   33084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:13:04.074522   33084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:13:04.226393   33084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:13:04.373125   33084 docker.go:233] disabling docker service ...
	I0915 07:13:04.373198   33084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:13:04.389408   33084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:13:04.402547   33084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:13:04.546824   33084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:13:04.693119   33084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:13:04.707066   33084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:13:04.726435   33084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:13:04.726504   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.737343   33084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:13:04.737420   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.747738   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.757880   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.768174   33084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:13:04.779364   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.789706   33084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.800722   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.811010   33084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:13:04.820531   33084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:13:04.830431   33084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:04.976537   33084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:13:05.210348   33084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:13:05.210439   33084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:13:05.218528   33084 start.go:563] Will wait 60s for crictl version
	I0915 07:13:05.218577   33084 ssh_runner.go:195] Run: which crictl
	I0915 07:13:05.222396   33084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:13:05.260472   33084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:13:05.260553   33084 ssh_runner.go:195] Run: crio --version
	I0915 07:13:05.288513   33084 ssh_runner.go:195] Run: crio --version
	I0915 07:13:05.319719   33084 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:13:05.321074   33084 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:13:05.323670   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:05.323969   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:05.323989   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:05.324211   33084 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:13:05.328789   33084 kubeadm.go:883] updating cluster {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:13:05.328912   33084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:13:05.328953   33084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:13:05.370084   33084 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:13:05.370106   33084 crio.go:433] Images already preloaded, skipping extraction
	I0915 07:13:05.370158   33084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:13:05.404993   33084 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:13:05.405016   33084 cache_images.go:84] Images are preloaded, skipping loading
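Both `crictl images` calls above confirm that the preload tarball already populated CRI-O's image store, so no pulls are needed. To eyeball the same list on the node (a sketch; same command the log runs, minus the JSON output):

    # Human-readable view of the preloaded images
    sudo crictl images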
	I0915 07:13:05.405026   33084 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0915 07:13:05.405128   33084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
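The [Unit]/[Service] fragment above is the kubeadm drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To see exactly what systemd ends up running after the daemon-reload later in the log (a sketch):

    # Unit file plus every drop-in, including the 10-kubeadm.conf written below
    systemctl cat kubelet
    # The effective ExecStart after the override
    systemctl show kubelet -p ExecStart --no-pager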
	I0915 07:13:05.405198   33084 ssh_runner.go:195] Run: crio config
	I0915 07:13:05.451641   33084 cni.go:84] Creating CNI manager for ""
	I0915 07:13:05.451666   33084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0915 07:13:05.451679   33084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:13:05.451706   33084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-670527 NodeName:ha-670527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 07:13:05.451870   33084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-670527"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
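The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new in the scp step below. When a start like this one goes sideways, validating that rendered file is a cheap first check (a sketch; it assumes kubeadm sits next to kubelet under /var/lib/minikube/binaries and that `kubeadm config validate` is available in this kubeadm release):

    # Parse and validate every document in the rendered file
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new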
	
	I0915 07:13:05.451894   33084 kube-vip.go:115] generating kube-vip config ...
	I0915 07:13:05.451937   33084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:13:05.463397   33084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:13:05.463510   33084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
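kube-vip runs as a static pod on each control-plane node and leader-elects (vip_leaderelection above) to hold the HA VIP 192.168.39.254 that the cluster config advertises as APIServerHAVIP. Once kubelet picks up the manifest written below, a quick liveness check looks like this (a sketch; /healthz is normally readable without credentials on a minikube apiserver):

    # Static pods carry the node name as a suffix
    kubectl -n kube-system get pod kube-vip-ha-670527
    # The VIP should answer on the apiserver port configured above
    curl -sk https://192.168.39.254:8443/healthz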
	I0915 07:13:05.463565   33084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:13:05.472909   33084 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:13:05.472973   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0915 07:13:05.481968   33084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0915 07:13:05.498082   33084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:13:05.514194   33084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0915 07:13:05.530060   33084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:13:05.546188   33084 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:13:05.550975   33084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:05.693252   33084 ssh_runner.go:195] Run: sudo systemctl start kubelet
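After the daemon-reload and start above, kubelet should report active within a few seconds; if a later wait in this log times out, its unit status and recent journal are the first things worth pulling from the node (a sketch):

    systemctl is-active kubelet
    journalctl -u kubelet --no-pager -n 20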
	I0915 07:13:05.708211   33084 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.54
	I0915 07:13:05.708239   33084 certs.go:194] generating shared ca certs ...
	I0915 07:13:05.708259   33084 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:05.708407   33084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:13:05.708456   33084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:13:05.708466   33084 certs.go:256] generating profile certs ...
	I0915 07:13:05.708534   33084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:13:05.708559   33084 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36
	I0915 07:13:05.708579   33084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.222 192.168.39.4 192.168.39.254]
	I0915 07:13:05.912333   33084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36 ...
	I0915 07:13:05.912366   33084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36: {Name:mkede11e354e48c918d49e89c20f9ce903a7e900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:05.912537   33084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36 ...
	I0915 07:13:05.912549   33084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36: {Name:mk78a3b85dd75125c20251506ffc90e14d844b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:05.912620   33084 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:13:05.912783   33084 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
	I0915 07:13:05.912915   33084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:13:05.912929   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:13:05.912941   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:13:05.912952   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:13:05.912965   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:13:05.912977   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:13:05.912990   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:13:05.913004   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:13:05.913015   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:13:05.913060   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:13:05.913087   33084 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:13:05.913099   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:13:05.913132   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:13:05.913166   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:13:05.913191   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:13:05.913227   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:13:05.913252   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:13:05.913264   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:13:05.913276   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:05.913790   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:13:05.938750   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:13:05.963278   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:13:05.987285   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:13:06.011016   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0915 07:13:06.033734   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:13:06.057375   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:13:06.082317   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:13:06.107283   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:13:06.132284   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:13:06.158120   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:13:06.184190   33084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:13:06.203248   33084 ssh_runner.go:195] Run: openssl version
	I0915 07:13:06.209520   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:13:06.222185   33084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:13:06.227208   33084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:13:06.227265   33084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:13:06.233477   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:13:06.246397   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:13:06.259265   33084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:06.263929   33084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:06.263991   33084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:06.269767   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:13:06.280612   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:13:06.293003   33084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:13:06.297625   33084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:13:06.297668   33084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:13:06.303513   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
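The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: `openssl x509 -hash` prints the hash, and minikube symlinks /etc/ssl/certs/<hash>.0 to the PEM so the system trust store can resolve it. Reproducing one of them by hand (a sketch):

    # Same hash the log computed for minikubeCA.pem
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # ...and the symlink minikube created from it
    ls -l /etc/ssl/certs/b5213941.0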
	I0915 07:13:06.314265   33084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:13:06.319018   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 07:13:06.324822   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 07:13:06.330650   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 07:13:06.336471   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 07:13:06.342590   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 07:13:06.348381   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
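Each `-checkend 86400` run above exits 0 only if the certificate stays valid for at least another 24 hours; minikube appears to consume just the exit code here, so nothing is echoed into the log. Inspecting one of the same certs manually (a sketch):

    # Exit 0 = still valid for at least 86400 seconds
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; echo $?
    # Actual expiry timestamp
    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt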
	I0915 07:13:06.354020   33084 kubeadm.go:392] StartCluster: {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:13:06.354163   33084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:13:06.354223   33084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:13:06.403806   33084 cri.go:89] found id: "16e47676692a4d5f02f13d5b02a137c073367b06fcfd27ef77109ae9fb6a3cb7"
	I0915 07:13:06.403830   33084 cri.go:89] found id: "24aa1acee0351497487e993913a0302b054a83c9ca876b69eb69e59a752f8192"
	I0915 07:13:06.403834   33084 cri.go:89] found id: "47184f847fb9cb6dbb9ea078aca39b32285cc7bfe9227f8cc205519b9f3e0d44"
	I0915 07:13:06.403837   33084 cri.go:89] found id: "fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4"
	I0915 07:13:06.403840   33084 cri.go:89] found id: "489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f"
	I0915 07:13:06.403843   33084 cri.go:89] found id: "606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5"
	I0915 07:13:06.403845   33084 cri.go:89] found id: "aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230"
	I0915 07:13:06.403847   33084 cri.go:89] found id: "b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0"
	I0915 07:13:06.403850   33084 cri.go:89] found id: "5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071"
	I0915 07:13:06.403854   33084 cri.go:89] found id: "bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf"
	I0915 07:13:06.403857   33084 cri.go:89] found id: "bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b"
	I0915 07:13:06.403870   33084 cri.go:89] found id: "f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6"
	I0915 07:13:06.403877   33084 cri.go:89] found id: "e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d"
	I0915 07:13:06.403882   33084 cri.go:89] found id: ""
	I0915 07:13:06.403931   33084 ssh_runner.go:195] Run: sudo runc list -f json
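The container IDs found above come from the labelled `crictl ps` call two lines earlier; during post-mortems like this one it is often handy to map an ID back to its pod (a sketch using the same CLI):

    # Same listing, human-readable, restricted to kube-system
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    # Full runtime metadata for one ID from the list above
    sudo crictl inspect 16e47676692a4d5f02f13d5b02a137c073367b06fcfd27ef77109ae9fb6a3cb7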
	
	
	==> CRI-O <==
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.252624057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384526252599236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d1b9ea2-b3a4-477d-834a-b9d90b557a40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.253267812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0174bbe8-e915-489a-a36e-4ee93a151cd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.253334435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0174bbe8-e915-489a-a36e-4ee93a151cd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.253717505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0174bbe8-e915-489a-a36e-4ee93a151cd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.303436106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38c41e97-6da5-4761-973e-8c9e981845d9 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.303565654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38c41e97-6da5-4761-973e-8c9e981845d9 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.304911071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24a9acf9-edc9-4523-a56c-f8bea6a1709c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.305662555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384526305628000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24a9acf9-edc9-4523-a56c-f8bea6a1709c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.306550707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b872a77b-7958-4e2e-aa39-4893f0748ecc name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.306659770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b872a77b-7958-4e2e-aa39-4893f0748ecc name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.307528744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b872a77b-7958-4e2e-aa39-4893f0748ecc name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.366946806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c799b366-b091-4b8b-b204-6552490af5f6 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.367038806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c799b366-b091-4b8b-b204-6552490af5f6 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.368368290Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4e78ee8-c39f-4562-83d6-176eeb1f80c7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.368788177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384526368764567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4e78ee8-c39f-4562-83d6-176eeb1f80c7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.369326374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e599e8c8-8ed9-4871-bdaa-19d7dda6201c name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.369401817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e599e8c8-8ed9-4871-bdaa-19d7dda6201c name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.369800530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e599e8c8-8ed9-4871-bdaa-19d7dda6201c name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.417852767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=970bff92-bf1e-40a7-aab3-1dbf60aeb861 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.417945409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=970bff92-bf1e-40a7-aab3-1dbf60aeb861 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.419545613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea4d170c-2462-43cd-8e2c-e17cefc27dbf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.420432522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384526420404620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea4d170c-2462-43cd-8e2c-e17cefc27dbf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.422811122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a47334dd-67fc-4d16-8d7e-c93239b5b88e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.422891471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a47334dd-67fc-4d16-8d7e-c93239b5b88e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:15:26 ha-670527 crio[3570]: time="2024-09-15 07:15:26.423676101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a47334dd-67fc-4d16-8d7e-c93239b5b88e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9c68c2fdc35ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   a942053d741b5       storage-provisioner
	5d41c86e84f15       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   872e08605b415       kube-controller-manager-ha-670527
	01401a2edbfbe       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   290c44145bc70       kube-apiserver-ha-670527
	a6d2baaf71284       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   4a7d5d0c0a663       busybox-7dff88458-rvbkj
	acf88539ab3da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   a942053d741b5       storage-provisioner
	1c1100b50ab8e       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   fbc1bf98c6cf7       kube-vip-ha-670527
	687491bc79a59       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   ffdce8f4087e8       kube-proxy-25xtk
	2bed002dfeaaf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   e8c13324e475b       coredns-7c65d6cfc9-lpj44
	425ce48c344f2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   5f50b24b84ad1       kindnet-6sqhd
	b860e6d04679a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   1a4733e12bd72       coredns-7c65d6cfc9-4w6x7
	8ab831ce85fce       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   e29b47453c420       etcd-ha-670527
	35fe255a9da10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   290c44145bc70       kube-apiserver-ha-670527
	d779b01c4db53       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   872e08605b415       kube-controller-manager-ha-670527
	5509d991aebbe       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   1245558aaba5f       kube-scheduler-ha-670527
	1d6d31c8606ff       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   ee2d1970f1e78       busybox-7dff88458-rvbkj
	fde41666d8c29       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   f7b4d1299c815       coredns-7c65d6cfc9-lpj44
	489cc4a0fb63e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   6f3bebb3d80d8       coredns-7c65d6cfc9-4w6x7
	aa6d2372c6ae3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   843991b56a260       kindnet-6sqhd
	b75dfe3b6121c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   594c62a0375e6       kube-proxy-25xtk
	bbb55bff5eb6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   6e7b02c328479       etcd-ha-670527
	e3475f73ce55b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   58967292ecf37       kube-scheduler-ha-670527
	
	
	==> coredns [2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56474->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35284->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35284->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f] <==
	[INFO] 10.244.2.2:37125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203403s
	[INFO] 10.244.2.2:50161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165008s
	[INFO] 10.244.2.2:56879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01294413s
	[INFO] 10.244.2.2:45083 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125708s
	[INFO] 10.244.1.2:52633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197932s
	[INFO] 10.244.1.2:50573 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770125s
	[INFO] 10.244.1.2:35701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180854s
	[INFO] 10.244.1.2:41389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132037s
	[INFO] 10.244.1.2:58202 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183842s
	[INFO] 10.244.1.2:49817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109611s
	[INFO] 10.244.0.4:52793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113159s
	[INFO] 10.244.0.4:38656 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106719s
	[INFO] 10.244.0.4:38122 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061858s
	[INFO] 10.244.2.2:46127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114243s
	[INFO] 10.244.1.2:54602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101327s
	[INFO] 10.244.1.2:55582 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124623s
	[INFO] 10.244.0.4:55917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104871s
	[INFO] 10.244.2.2:41069 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001913s
	[INFO] 10.244.1.2:58958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140612s
	[INFO] 10.244.1.2:39608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015189s
	[INFO] 10.244.1.2:40627 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154411s
	[INFO] 10.244.0.4:53377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121746s
	[INFO] 10.244.0.4:52578 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089133s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b] <==
	[INFO] plugin/kubernetes: Trace[873210321]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:17.056) (total time: 10001ms):
	Trace[873210321]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:13:27.058)
	Trace[873210321]: [10.001536302s] [10.001536302s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[59895605]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:19.221) (total time: 10001ms):
	Trace[59895605]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:13:29.222)
	Trace[59895605]: [10.001009294s] [10.001009294s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4] <==
	[INFO] 10.244.2.2:33194 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170028s
	[INFO] 10.244.2.2:40376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000180655s
	[INFO] 10.244.1.2:52585 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001449553s
	[INFO] 10.244.1.2:53060 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208928s
	[INFO] 10.244.0.4:56755 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001727309s
	[INFO] 10.244.0.4:60825 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234694s
	[INFO] 10.244.0.4:58873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001046398s
	[INFO] 10.244.0.4:42322 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104256s
	[INFO] 10.244.0.4:34109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000038552s
	[INFO] 10.244.2.2:60809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124458s
	[INFO] 10.244.2.2:36825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093407s
	[INFO] 10.244.2.2:56100 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075616s
	[INFO] 10.244.1.2:47124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122782s
	[INFO] 10.244.1.2:55965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096943s
	[INFO] 10.244.0.4:34915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120044s
	[INFO] 10.244.0.4:43696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073334s
	[INFO] 10.244.0.4:59415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158827s
	[INFO] 10.244.2.2:35148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177137s
	[INFO] 10.244.2.2:58466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166358s
	[INFO] 10.244.2.2:60740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000304437s
	[INFO] 10.244.1.2:54984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149622s
	[INFO] 10.244.0.4:44476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075065s
	[INFO] 10.244.0.4:37204 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054807s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-670527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:15:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:02:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-670527
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4352c21da1154e49b4f2cd8223ef4f22
	  System UUID:                4352c21d-a115-4e49-b4f2-cd8223ef4f22
	  Boot ID:                    28f13bdf-c0fc-4804-9eaa-c62790060557
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rvbkj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-4w6x7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-lpj44             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-670527                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-6sqhd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-670527             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-670527    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-25xtk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-670527             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-670527                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 93s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-670527 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-670527 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-670527 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-670527 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   NodeNotReady             2m40s (x2 over 3m5s)   kubelet          Node ha-670527 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m24s (x2 over 3m24s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           90s                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   RegisteredNode           88s                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   RegisteredNode           40s                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	
	
	Name:               ha-670527-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:15:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:14:37 +0000   Sun, 15 Sep 2024 07:13:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:14:37 +0000   Sun, 15 Sep 2024 07:13:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:14:37 +0000   Sun, 15 Sep 2024 07:13:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:14:37 +0000   Sun, 15 Sep 2024 07:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-670527-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 937badb420fd46bab8c9040c7d7b213d
	  System UUID:                937badb4-20fd-46ba-b8c9-040c7d7b213d
	  Boot ID:                    5a3946cf-1742-47d7-b935-730fee807ecb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxwp9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-670527-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-mn54b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-670527-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-670527-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kt79t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-670527-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-670527-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 12m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x3 over 12m)    kubelet          Node ha-670527-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)    kubelet          Node ha-670527-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)    kubelet          Node ha-670527-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeReady                12m                  kubelet          Node ha-670527-m02 status is now: NodeReady
	  Normal  RegisteredNode           11m                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeNotReady             8m46s                node-controller  Node ha-670527-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node ha-670527-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x7 over 119s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           91s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           89s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           41s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	
	
	Name:               ha-670527-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_04_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:04:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:15:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:15:04 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:15:04 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:15:04 +0000   Sun, 15 Sep 2024 07:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:15:04 +0000   Sun, 15 Sep 2024 07:04:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-670527-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16b4217ec868437981a046051de1bf49
	  System UUID:                16b4217e-c868-4379-81a0-46051de1bf49
	  Boot ID:                    94dfca2f-2d32-4634-b8e0-fce6ff7fb848
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4cgxn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-670527-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-fcgbj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-670527-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-670527-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mbcxc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-670527-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-670527-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 36s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-670527-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-670527-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-670527-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  54s                kubelet          Node ha-670527-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s                kubelet          Node ha-670527-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s                kubelet          Node ha-670527-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 54s                kubelet          Node ha-670527-m03 has been rebooted, boot id: 94dfca2f-2d32-4634-b8e0-fce6ff7fb848
	  Normal   RegisteredNode           41s                node-controller  Node ha-670527-m03 event: Registered Node ha-670527-m03 in Controller
	
	
	Name:               ha-670527-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_05_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:05:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:15:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:15:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-670527-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24d136e447f34c399b15050eaf7b094c
	  System UUID:                24d136e4-47f3-4c39-9b15-050eaf7b094c
	  Boot ID:                    228ee5dc-0839-4c73-837a-a187890e2319
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4l8cf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-fq2lt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 9m58s              kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-670527-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   NodeReady                9m44s              kubelet          Node ha-670527-m04 status is now: NodeReady
	  Normal   RegisteredNode           91s                node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   NodeNotReady             51s                node-controller  Node ha-670527-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           41s                node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x3 over 9s)    kubelet          Node ha-670527-m04 has been rebooted, boot id: 228ee5dc-0839-4c73-837a-a187890e2319
	  Normal   NodeHasSufficientMemory  9s (x4 over 9s)    kubelet          Node ha-670527-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x4 over 9s)    kubelet          Node ha-670527-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x4 over 9s)    kubelet          Node ha-670527-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-670527-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-670527-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.438155] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.055236] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053736] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.163785] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.149779] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293443] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.937975] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.762754] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.062847] kauditd_printk_skb: 158 callbacks suppressed
	[Sep15 07:02] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.107092] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.313948] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.323190] kauditd_printk_skb: 38 callbacks suppressed
	[Sep15 07:03] kauditd_printk_skb: 26 callbacks suppressed
	[Sep15 07:13] systemd-fstab-generator[3495]: Ignoring "noauto" option for root device
	[  +0.154106] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.170714] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.148530] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.278100] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.716835] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +4.332313] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.209534] kauditd_printk_skb: 97 callbacks suppressed
	[ +35.416650] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723] <==
	{"level":"warn","ts":"2024-09-15T07:14:28.793527Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f153910e35189484","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-15T07:14:31.014739Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.4:2380/version","remote-member-id":"f153910e35189484","error":"Get \"https://192.168.39.4:2380/version\": dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-15T07:14:31.014879Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f153910e35189484","error":"Get \"https://192.168.39.4:2380/version\": dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-15T07:14:31.691260Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f153910e35189484","rtt":"0s","error":"dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-15T07:14:31.693681Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f153910e35189484","rtt":"0s","error":"dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-15T07:14:35.017224Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.4:2380/version","remote-member-id":"f153910e35189484","error":"Get \"https://192.168.39.4:2380/version\": dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-15T07:14:35.017288Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f153910e35189484","error":"Get \"https://192.168.39.4:2380/version\": dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-15T07:14:36.691689Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f153910e35189484","rtt":"0s","error":"dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-15T07:14:36.694718Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f153910e35189484","rtt":"0s","error":"dial tcp 192.168.39.4:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-15T07:14:38.847297Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:14:38.847691Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:14:38.847934Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:14:38.867752Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"731f5c40d4af6217","to":"f153910e35189484","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-15T07:14:38.867815Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:14:38.876482Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"731f5c40d4af6217","to":"f153910e35189484","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-15T07:14:38.876547Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:14:43.229036Z","caller":"traceutil/trace.go:171","msg":"trace[742164605] linearizableReadLoop","detail":"{readStateIndex:2738; appliedIndex:2738; }","duration":"183.24583ms","start":"2024-09-15T07:14:43.045763Z","end":"2024-09-15T07:14:43.229008Z","steps":["trace[742164605] 'read index received'  (duration: 183.240586ms)","trace[742164605] 'applied index is now lower than readState.Index'  (duration: 4.003µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:14:43.229894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.035924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-09-15T07:14:43.230052Z","caller":"traceutil/trace.go:171","msg":"trace[2065357973] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2351; }","duration":"184.2804ms","start":"2024-09-15T07:14:43.045759Z","end":"2024-09-15T07:14:43.230039Z","steps":["trace[2065357973] 'agreement among raft nodes before linearized reading'  (duration: 183.360055ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:14:43.230421Z","caller":"traceutil/trace.go:171","msg":"trace[1819949652] transaction","detail":"{read_only:false; response_revision:2352; number_of_response:1; }","duration":"186.956008ms","start":"2024-09-15T07:14:43.043450Z","end":"2024-09-15T07:14:43.230406Z","steps":["trace[1819949652] 'process raft request'  (duration: 184.840567ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:15:21.687927Z","caller":"traceutil/trace.go:171","msg":"trace[404736881] linearizableReadLoop","detail":"{readStateIndex:2938; appliedIndex:2938; }","duration":"122.250418ms","start":"2024-09-15T07:15:21.565658Z","end":"2024-09-15T07:15:21.687908Z","steps":["trace[404736881] 'read index received'  (duration: 122.24227ms)","trace[404736881] 'applied index is now lower than readState.Index'  (duration: 6.924µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:15:21.688186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.457504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-15T07:15:21.688213Z","caller":"traceutil/trace.go:171","msg":"trace[2033213668] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:2518; }","duration":"122.567825ms","start":"2024-09-15T07:15:21.565638Z","end":"2024-09-15T07:15:21.688205Z","steps":["trace[2033213668] 'agreement among raft nodes before linearized reading'  (duration: 122.434311ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:15:21.688294Z","caller":"traceutil/trace.go:171","msg":"trace[1968903699] transaction","detail":"{read_only:false; response_revision:2519; number_of_response:1; }","duration":"181.555184ms","start":"2024-09-15T07:15:21.506724Z","end":"2024-09-15T07:15:21.688280Z","steps":["trace[1968903699] 'process raft request'  (duration: 181.209887ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:15:25.881449Z","caller":"traceutil/trace.go:171","msg":"trace[370581060] transaction","detail":"{read_only:false; response_revision:2536; number_of_response:1; }","duration":"172.483275ms","start":"2024-09-15T07:15:25.708951Z","end":"2024-09-15T07:15:25.881434Z","steps":["trace[370581060] 'process raft request'  (duration: 172.394559ms)"],"step_count":1}
	
	
	==> etcd [bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b] <==
	{"level":"warn","ts":"2024-09-15T07:11:32.600880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T07:11:31.725305Z","time spent":"875.566966ms","remote":"127.0.0.1:33056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 "}
	2024/09/15 07:11:32 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-15T07:11:32.633361Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:11:32.633415Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T07:11:32.635109Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"731f5c40d4af6217","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-15T07:11:32.635500Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635612Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635721Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635936Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635993Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.636039Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.636103Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.636111Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636167Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636229Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636308Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636357Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636405Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636447Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.638849Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"warn","ts":"2024-09-15T07:11:32.638939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.923636605s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-15T07:11:32.638979Z","caller":"traceutil/trace.go:171","msg":"trace[1657840785] range","detail":"{range_begin:; range_end:; }","duration":"8.923689869s","start":"2024-09-15T07:11:23.715281Z","end":"2024-09-15T07:11:32.638971Z","steps":["trace[1657840785] 'agreement among raft nodes before linearized reading'  (duration: 8.923635149s)"],"step_count":1}
	{"level":"error","ts":"2024-09-15T07:11:32.639036Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-15T07:11:32.639591Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-09-15T07:11:32.639675Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-670527","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	
	
	==> kernel <==
	 07:15:27 up 14 min,  0 users,  load average: 0.68, 0.72, 0.40
	Linux ha-670527 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1] <==
	I0915 07:14:51.823445       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:15:01.827422       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:15:01.827556       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:15:01.827805       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:15:01.827869       1 main.go:299] handling current node
	I0915 07:15:01.827953       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:15:01.828017       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:15:01.828227       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:15:01.828318       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:15:11.823001       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:15:11.823214       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:15:11.823413       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:15:11.823538       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:15:11.823820       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:15:11.823910       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:15:11.824049       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:15:11.824088       1 main.go:299] handling current node
	I0915 07:15:21.824424       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:15:21.824548       1 main.go:299] handling current node
	I0915 07:15:21.825692       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:15:21.826277       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:15:21.826574       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:15:21.826637       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:15:21.826774       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:15:21.826806       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230] <==
	I0915 07:11:09.120706       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:11:09.120826       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:11:09.120994       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:11:09.121046       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:11:09.121261       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:11:09.121358       1 main.go:299] handling current node
	I0915 07:11:09.121389       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:11:09.121439       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:11:19.119244       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:11:19.119328       1 main.go:299] handling current node
	I0915 07:11:19.119358       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:11:19.119396       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:11:19.119554       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:11:19.119578       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:11:19.119640       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:11:19.119658       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:11:29.122264       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:11:29.122346       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:11:29.122502       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:11:29.122525       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:11:29.122582       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:11:29.122601       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:11:29.122682       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:11:29.122702       1 main.go:299] handling current node
	E0915 07:11:30.712956       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kube-apiserver [01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d] <==
	I0915 07:13:52.361006       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0915 07:13:52.361040       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0915 07:13:52.441005       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 07:13:52.445919       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:13:52.445962       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0915 07:13:52.446055       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 07:13:52.446321       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:13:52.451445       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:13:52.451481       1 policy_source.go:224] refreshing policies
	I0915 07:13:52.451587       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:13:52.452027       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:13:52.459653       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:13:52.459680       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:13:52.461304       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:13:52.461393       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:13:52.461411       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:13:52.461417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:13:52.461421       1 cache.go:39] Caches are synced for autoregister controller
	W0915 07:13:52.461800       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0915 07:13:52.464636       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 07:13:52.476675       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0915 07:13:52.490072       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0915 07:13:52.533735       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:13:53.360206       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0915 07:13:53.910970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4 192.168.39.54]
	
	
	==> kube-apiserver [35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad] <==
	I0915 07:13:11.097204       1 options.go:228] external host was not specified, using 192.168.39.54
	I0915 07:13:11.102400       1 server.go:142] Version: v1.31.1
	I0915 07:13:11.102513       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:13:11.718462       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0915 07:13:11.732279       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:13:11.744989       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0915 07:13:11.745092       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0915 07:13:11.745733       1 instance.go:232] Using reconciler: lease
	W0915 07:13:31.718734       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0915 07:13:31.718734       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0915 07:13:31.747492       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7] <==
	I0915 07:14:09.832301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.777208ms"
	I0915 07:14:09.832616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.727µs"
	I0915 07:14:09.876472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.101014ms"
	I0915 07:14:09.876810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.005µs"
	I0915 07:14:09.933477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.502153ms"
	I0915 07:14:09.933650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="89.362µs"
	I0915 07:14:16.511954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.417389ms"
	I0915 07:14:16.512107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.536µs"
	I0915 07:14:33.983389       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m03"
	I0915 07:14:34.944222       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.882623ms"
	I0915 07:14:34.944347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.283µs"
	I0915 07:14:36.282012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:14:36.311755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:14:37.623658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:14:38.698059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:14:41.357591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:14:46.882407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:14:46.986790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:14:58.196309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.102426ms"
	I0915 07:14:58.196654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.972µs"
	I0915 07:15:04.426290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m03"
	I0915 07:15:18.375923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-670527-m04"
	I0915 07:15:18.377192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:15:18.401249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	I0915 07:15:18.654720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	
	
	==> kube-controller-manager [d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4] <==
	I0915 07:13:11.877095       1 serving.go:386] Generated self-signed cert in-memory
	I0915 07:13:12.742551       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0915 07:13:12.742589       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:13:12.744022       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0915 07:13:12.744724       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0915 07:13:12.744868       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 07:13:12.744978       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0915 07:13:32.753231       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.54:8443/healthz\": dial tcp 192.168.39.54:8443: connect: connection refused"
	
	
	==> kube-proxy [687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a] <==
	E0915 07:13:52.908669       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-670527\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0915 07:13:52.908729       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0915 07:13:52.908793       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:13:52.960766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:13:52.960832       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:13:52.960915       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:13:52.963414       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:13:52.963715       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:13:52.963748       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:13:52.965637       1 config.go:199] "Starting service config controller"
	I0915 07:13:52.965700       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:13:52.965753       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:13:52.965773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:13:52.966561       1 config.go:328] "Starting node config controller"
	I0915 07:13:52.966592       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0915 07:13:55.980700       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0915 07:13:55.981401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:13:55.981883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:13:55.981531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:13:55.982041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:13:55.981598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:13:55.982115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0915 07:13:57.267046       1 shared_informer.go:320] Caches are synced for node config
	I0915 07:13:57.267464       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:13:57.466480       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0] <==
	E0915 07:10:27.089505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:30.157685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:30.159095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:30.159410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:30.159564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:30.160051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:30.160112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:36.302224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:36.302692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:36.302736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:36.302819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:36.302343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:36.302871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:45.516538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:45.516698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:48.588608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:48.588748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:48.588914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:48.588973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:11:07.021210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:11:07.021276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:11:07.020628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:11:07.021549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:11:13.165237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:11:13.165310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488] <==
	W0915 07:13:42.789864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:42.789940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:47.187796       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.54:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:47.187840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.54:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:47.864765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:47.864886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:47.937677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.54:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:47.937796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.54:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:48.920330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:48.920474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:49.029346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:49.029437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:49.039276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:49.039347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:50.016764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:50.016888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:52.380979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 07:13:52.381056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:52.416779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 07:13:52.416895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:52.417074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 07:13:52.417246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:52.417598       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 07:13:52.417649       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 07:14:08.274914       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d] <==
	I0915 07:02:03.481353       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:04:42.984344       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:04:42.984530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5fc959e1-a77e-415a-bbea-3dd4303e82d9(default/busybox-7dff88458-gxwp9) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gxwp9"
	E0915 07:04:42.984580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" pod="default/busybox-7dff88458-gxwp9"
	I0915 07:04:42.984652       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:05:23.207787       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	E0915 07:05:23.207903       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 50b6a6aa-70b7-41b5-9554-5fef223d25a4(kube-system/kube-proxy-fq2lt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fq2lt"
	E0915 07:05:23.207927       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" pod="kube-system/kube-proxy-fq2lt"
	I0915 07:05:23.207964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	E0915 07:11:17.519768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0915 07:11:19.429657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0915 07:11:19.649853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:21.695009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:21.701104       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0915 07:11:21.860304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0915 07:11:22.541338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0915 07:11:23.108508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:24.626023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0915 07:11:26.928063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0915 07:11:27.572956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:27.831582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0915 07:11:28.383274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0915 07:11:28.615806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0915 07:11:30.537002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0915 07:11:32.559719       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 15 07:14:12 ha-670527 kubelet[1303]: E0915 07:14:12.721807    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384452721322167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:12 ha-670527 kubelet[1303]: E0915 07:14:12.721835    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384452721322167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:22 ha-670527 kubelet[1303]: E0915 07:14:22.724538    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384462724046839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:22 ha-670527 kubelet[1303]: E0915 07:14:22.724644    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384462724046839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:25 ha-670527 kubelet[1303]: I0915 07:14:25.477044    1303 scope.go:117] "RemoveContainer" containerID="acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63"
	Sep 15 07:14:25 ha-670527 kubelet[1303]: I0915 07:14:25.930414    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-rvbkj" podStartSLOduration=581.019025402 podStartE2EDuration="9m43.930375607s" podCreationTimestamp="2024-09-15 07:04:42 +0000 UTC" firstStartedPulling="2024-09-15 07:04:44.468594422 +0000 UTC m=+162.161868233" lastFinishedPulling="2024-09-15 07:04:47.379944627 +0000 UTC m=+165.073218438" observedRunningTime="2024-09-15 07:04:48.206014952 +0000 UTC m=+165.899288785" watchObservedRunningTime="2024-09-15 07:14:25.930375607 +0000 UTC m=+743.623649439"
	Sep 15 07:14:32 ha-670527 kubelet[1303]: E0915 07:14:32.727468    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384472727020543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:32 ha-670527 kubelet[1303]: E0915 07:14:32.727514    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384472727020543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:42 ha-670527 kubelet[1303]: I0915 07:14:42.477021    1303 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-670527" podUID="3ad87a12-7eca-44cb-8b2f-df38f92d8e4d"
	Sep 15 07:14:42 ha-670527 kubelet[1303]: I0915 07:14:42.496042    1303 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-670527"
	Sep 15 07:14:42 ha-670527 kubelet[1303]: E0915 07:14:42.729798    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384482729094720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:42 ha-670527 kubelet[1303]: E0915 07:14:42.729854    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384482729094720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:52 ha-670527 kubelet[1303]: E0915 07:14:52.734017    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384492733540658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:52 ha-670527 kubelet[1303]: E0915 07:14:52.734043    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384492733540658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:02 ha-670527 kubelet[1303]: E0915 07:15:02.496714    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:15:02 ha-670527 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:15:02 ha-670527 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:15:02 ha-670527 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:15:02 ha-670527 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:15:02 ha-670527 kubelet[1303]: E0915 07:15:02.735466    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384502735037214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:02 ha-670527 kubelet[1303]: E0915 07:15:02.735510    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384502735037214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:12 ha-670527 kubelet[1303]: E0915 07:15:12.737306    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384512736865362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:12 ha-670527 kubelet[1303]: E0915 07:15:12.737586    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384512736865362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:22 ha-670527 kubelet[1303]: E0915 07:15:22.738935    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384522738621875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:22 ha-670527 kubelet[1303]: E0915 07:15:22.739534    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384522738621875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:15:25.941550   34337 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19644-6166/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
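The "failed to output last start logs ... bufio.Scanner: token too long" error captured above is Go's standard scanner hitting its default 64 KiB per-line limit on a long line in lastStart.txt. As a point of reference only (this is an illustrative sketch, not minikube's code, and the file name is hypothetical), the usual workaround is to enlarge the scanner buffer before reading:

// Illustrative sketch only: read a log file whose lines may exceed
// bufio.Scanner's default 64 KiB limit by enlarging the buffer first.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical path, for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Allow lines up to 16 MiB instead of the 64 KiB default.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, this is where "token too long" surfaces.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}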
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-670527 -n ha-670527
helpers_test.go:261: (dbg) Run:  kubectl --context ha-670527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (358.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 stop -v=7 --alsologtostderr
E0915 07:16:02.684529   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 stop -v=7 --alsologtostderr: exit status 82 (2m0.465213106s)

                                                
                                                
-- stdout --
	* Stopping node "ha-670527-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:15:45.122471   34733 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:15:45.122748   34733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:15:45.122759   34733 out.go:358] Setting ErrFile to fd 2...
	I0915 07:15:45.122766   34733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:15:45.122940   34733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:15:45.123204   34733 out.go:352] Setting JSON to false
	I0915 07:15:45.123302   34733 mustload.go:65] Loading cluster: ha-670527
	I0915 07:15:45.123705   34733 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:15:45.123813   34733 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:15:45.124009   34733 mustload.go:65] Loading cluster: ha-670527
	I0915 07:15:45.124165   34733 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:15:45.124201   34733 stop.go:39] StopHost: ha-670527-m04
	I0915 07:15:45.124596   34733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:15:45.124642   34733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:15:45.139458   34733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I0915 07:15:45.139936   34733 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:15:45.140485   34733 main.go:141] libmachine: Using API Version  1
	I0915 07:15:45.140509   34733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:15:45.140991   34733 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:15:45.143570   34733 out.go:177] * Stopping node "ha-670527-m04"  ...
	I0915 07:15:45.145001   34733 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0915 07:15:45.145024   34733 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:15:45.145271   34733 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0915 07:15:45.145295   34733 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:15:45.147744   34733 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:15:45.148182   34733 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:15:12 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:15:45.148208   34733 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:15:45.148330   34733 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:15:45.148492   34733 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:15:45.148631   34733 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:15:45.148739   34733 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	I0915 07:15:45.233669   34733 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0915 07:15:45.288508   34733 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0915 07:15:45.341267   34733 main.go:141] libmachine: Stopping "ha-670527-m04"...
	I0915 07:15:45.341309   34733 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:15:45.342853   34733 main.go:141] libmachine: (ha-670527-m04) Calling .Stop
	I0915 07:15:45.345935   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 0/120
	I0915 07:15:46.348059   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 1/120
	I0915 07:15:47.349399   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 2/120
	I0915 07:15:48.350636   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 3/120
	I0915 07:15:49.352565   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 4/120
	I0915 07:15:50.354584   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 5/120
	I0915 07:15:51.356416   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 6/120
	I0915 07:15:52.357789   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 7/120
	I0915 07:15:53.359455   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 8/120
	I0915 07:15:54.360806   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 9/120
	I0915 07:15:55.363031   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 10/120
	I0915 07:15:56.364465   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 11/120
	I0915 07:15:57.366394   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 12/120
	I0915 07:15:58.367743   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 13/120
	I0915 07:15:59.370099   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 14/120
	I0915 07:16:00.372167   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 15/120
	I0915 07:16:01.373519   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 16/120
	I0915 07:16:02.374848   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 17/120
	I0915 07:16:03.376219   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 18/120
	I0915 07:16:04.377450   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 19/120
	I0915 07:16:05.379508   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 20/120
	I0915 07:16:06.380908   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 21/120
	I0915 07:16:07.382206   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 22/120
	I0915 07:16:08.383566   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 23/120
	I0915 07:16:09.384784   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 24/120
	I0915 07:16:10.386715   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 25/120
	I0915 07:16:11.388301   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 26/120
	I0915 07:16:12.389767   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 27/120
	I0915 07:16:13.391128   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 28/120
	I0915 07:16:14.392428   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 29/120
	I0915 07:16:15.394651   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 30/120
	I0915 07:16:16.395904   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 31/120
	I0915 07:16:17.397237   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 32/120
	I0915 07:16:18.398454   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 33/120
	I0915 07:16:19.400213   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 34/120
	I0915 07:16:20.402072   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 35/120
	I0915 07:16:21.404798   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 36/120
	I0915 07:16:22.406085   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 37/120
	I0915 07:16:23.407343   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 38/120
	I0915 07:16:24.408707   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 39/120
	I0915 07:16:25.410928   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 40/120
	I0915 07:16:26.412062   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 41/120
	I0915 07:16:27.413286   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 42/120
	I0915 07:16:28.415582   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 43/120
	I0915 07:16:29.417055   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 44/120
	I0915 07:16:30.419073   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 45/120
	I0915 07:16:31.420546   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 46/120
	I0915 07:16:32.421687   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 47/120
	I0915 07:16:33.423219   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 48/120
	I0915 07:16:34.424719   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 49/120
	I0915 07:16:35.426848   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 50/120
	I0915 07:16:36.428724   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 51/120
	I0915 07:16:37.430180   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 52/120
	I0915 07:16:38.432326   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 53/120
	I0915 07:16:39.433568   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 54/120
	I0915 07:16:40.435720   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 55/120
	I0915 07:16:41.437052   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 56/120
	I0915 07:16:42.438498   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 57/120
	I0915 07:16:43.440010   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 58/120
	I0915 07:16:44.442039   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 59/120
	I0915 07:16:45.444199   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 60/120
	I0915 07:16:46.445598   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 61/120
	I0915 07:16:47.446905   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 62/120
	I0915 07:16:48.448614   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 63/120
	I0915 07:16:49.449897   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 64/120
	I0915 07:16:50.451993   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 65/120
	I0915 07:16:51.453623   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 66/120
	I0915 07:16:52.454901   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 67/120
	I0915 07:16:53.456344   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 68/120
	I0915 07:16:54.457700   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 69/120
	I0915 07:16:55.459869   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 70/120
	I0915 07:16:56.461036   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 71/120
	I0915 07:16:57.462628   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 72/120
	I0915 07:16:58.463920   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 73/120
	I0915 07:16:59.465870   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 74/120
	I0915 07:17:00.467690   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 75/120
	I0915 07:17:01.469010   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 76/120
	I0915 07:17:02.470422   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 77/120
	I0915 07:17:03.472425   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 78/120
	I0915 07:17:04.473789   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 79/120
	I0915 07:17:05.475758   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 80/120
	I0915 07:17:06.476892   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 81/120
	I0915 07:17:07.478199   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 82/120
	I0915 07:17:08.480184   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 83/120
	I0915 07:17:09.481561   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 84/120
	I0915 07:17:10.483393   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 85/120
	I0915 07:17:11.484639   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 86/120
	I0915 07:17:12.486487   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 87/120
	I0915 07:17:13.487703   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 88/120
	I0915 07:17:14.488892   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 89/120
	I0915 07:17:15.490953   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 90/120
	I0915 07:17:16.492267   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 91/120
	I0915 07:17:17.493486   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 92/120
	I0915 07:17:18.494834   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 93/120
	I0915 07:17:19.496404   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 94/120
	I0915 07:17:20.498223   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 95/120
	I0915 07:17:21.500285   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 96/120
	I0915 07:17:22.501728   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 97/120
	I0915 07:17:23.502912   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 98/120
	I0915 07:17:24.504259   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 99/120
	I0915 07:17:25.506337   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 100/120
	I0915 07:17:26.508130   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 101/120
	I0915 07:17:27.510394   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 102/120
	I0915 07:17:28.511738   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 103/120
	I0915 07:17:29.513322   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 104/120
	I0915 07:17:30.515119   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 105/120
	I0915 07:17:31.516501   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 106/120
	I0915 07:17:32.517689   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 107/120
	I0915 07:17:33.519385   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 108/120
	I0915 07:17:34.520744   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 109/120
	I0915 07:17:35.522876   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 110/120
	I0915 07:17:36.524231   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 111/120
	I0915 07:17:37.525665   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 112/120
	I0915 07:17:38.527570   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 113/120
	I0915 07:17:39.529129   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 114/120
	I0915 07:17:40.531083   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 115/120
	I0915 07:17:41.532268   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 116/120
	I0915 07:17:42.533855   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 117/120
	I0915 07:17:43.535317   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 118/120
	I0915 07:17:44.536619   34733 main.go:141] libmachine: (ha-670527-m04) Waiting for machine to stop 119/120
	I0915 07:17:45.537642   34733 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0915 07:17:45.537695   34733 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0915 07:17:45.539995   34733 out.go:201] 
	W0915 07:17:45.541316   34733 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0915 07:17:45.541333   34733 out.go:270] * 
	* 
	W0915 07:17:45.543407   34733 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 07:17:45.544759   34733 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-670527 stop -v=7 --alsologtostderr": exit status 82
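The repeated "Waiting for machine to stop n/120" lines and the final `unable to stop vm, current state "Running"` indicate a fixed poll loop that gives up after 120 one-second checks, which is where the roughly 2m0s exit-status-82 failure above comes from. The sketch below is only a generic illustration of that polling pattern under those assumptions; the names and the getState helper are hypothetical, not libmachine's API:

// Generic poll-until-stopped loop, illustrating the 120-attempt pattern seen
// in the log above. getState is a hypothetical stand-in for the driver call.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForStop(getState func() string, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A VM that never leaves "Running" exhausts attempts*interval and fails.
	// The test above effectively used 120 attempts at 1s each; smaller values
	// are used here only so the example finishes quickly.
	neverStops := func() string { return "Running" }
	if err := waitForStop(neverStops, 5, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}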
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
E0915 07:17:56.199469   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr: exit status 3 (19.07983199s)

                                                
                                                
-- stdout --
	ha-670527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670527-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:17:45.590943   35567 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:17:45.591223   35567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:17:45.591233   35567 out.go:358] Setting ErrFile to fd 2...
	I0915 07:17:45.591238   35567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:17:45.591396   35567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:17:45.591559   35567 out.go:352] Setting JSON to false
	I0915 07:17:45.591590   35567 mustload.go:65] Loading cluster: ha-670527
	I0915 07:17:45.591708   35567 notify.go:220] Checking for updates...
	I0915 07:17:45.592108   35567 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:17:45.592132   35567 status.go:255] checking status of ha-670527 ...
	I0915 07:17:45.592732   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:45.592774   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:45.609130   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0915 07:17:45.609625   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:45.610296   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:45.610321   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:45.610708   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:45.610939   35567 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:17:45.612538   35567 status.go:330] ha-670527 host status = "Running" (err=<nil>)
	I0915 07:17:45.612554   35567 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:17:45.612899   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:45.612941   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:45.627395   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0915 07:17:45.628023   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:45.628552   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:45.628577   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:45.628883   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:45.629066   35567 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:17:45.632164   35567 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:17:45.632595   35567 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:17:45.632620   35567 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:17:45.632789   35567 host.go:66] Checking if "ha-670527" exists ...
	I0915 07:17:45.633068   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:45.633104   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:45.648062   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0915 07:17:45.648520   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:45.648995   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:45.649016   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:45.649337   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:45.649483   35567 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:17:45.649666   35567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:17:45.649686   35567 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:17:45.652173   35567 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:17:45.652593   35567 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:17:45.652624   35567 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:17:45.652746   35567 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:17:45.652896   35567 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:17:45.653038   35567 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:17:45.653167   35567 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:17:45.742597   35567 ssh_runner.go:195] Run: systemctl --version
	I0915 07:17:45.748949   35567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:17:45.766328   35567 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:17:45.766356   35567 api_server.go:166] Checking apiserver status ...
	I0915 07:17:45.766385   35567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:17:45.781960   35567 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4953/cgroup
	W0915 07:17:45.792562   35567 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:17:45.792609   35567 ssh_runner.go:195] Run: ls
	I0915 07:17:45.796894   35567 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:17:45.801301   35567 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:17:45.801321   35567 status.go:422] ha-670527 apiserver status = Running (err=<nil>)
	I0915 07:17:45.801334   35567 status.go:257] ha-670527 status: &{Name:ha-670527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:17:45.801354   35567 status.go:255] checking status of ha-670527-m02 ...
	I0915 07:17:45.801674   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:45.801713   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:45.816782   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0915 07:17:45.817182   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:45.817662   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:45.817688   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:45.817999   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:45.818179   35567 main.go:141] libmachine: (ha-670527-m02) Calling .GetState
	I0915 07:17:45.819427   35567 status.go:330] ha-670527-m02 host status = "Running" (err=<nil>)
	I0915 07:17:45.819444   35567 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:17:45.819717   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:45.819751   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:45.834224   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0915 07:17:45.834577   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:45.835046   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:45.835070   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:45.835420   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:45.835610   35567 main.go:141] libmachine: (ha-670527-m02) Calling .GetIP
	I0915 07:17:45.838075   35567 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:17:45.838546   35567 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:13:17 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:17:45.838575   35567 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:17:45.838737   35567 host.go:66] Checking if "ha-670527-m02" exists ...
	I0915 07:17:45.839017   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:45.839048   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:45.853515   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0915 07:17:45.853999   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:45.854477   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:45.854491   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:45.854731   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:45.854886   35567 main.go:141] libmachine: (ha-670527-m02) Calling .DriverName
	I0915 07:17:45.855024   35567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:17:45.855041   35567 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHHostname
	I0915 07:17:45.857620   35567 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:17:45.858084   35567 main.go:141] libmachine: (ha-670527-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:e6:7b", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:13:17 +0000 UTC Type:0 Mac:52:54:00:5d:e6:7b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-670527-m02 Clientid:01:52:54:00:5d:e6:7b}
	I0915 07:17:45.858133   35567 main.go:141] libmachine: (ha-670527-m02) DBG | domain ha-670527-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:5d:e6:7b in network mk-ha-670527
	I0915 07:17:45.858265   35567 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHPort
	I0915 07:17:45.858408   35567 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHKeyPath
	I0915 07:17:45.858544   35567 main.go:141] libmachine: (ha-670527-m02) Calling .GetSSHUsername
	I0915 07:17:45.858652   35567 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m02/id_rsa Username:docker}
	I0915 07:17:45.950997   35567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:17:45.969541   35567 kubeconfig.go:125] found "ha-670527" server: "https://192.168.39.254:8443"
	I0915 07:17:45.969577   35567 api_server.go:166] Checking apiserver status ...
	I0915 07:17:45.969621   35567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:17:45.987248   35567 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	W0915 07:17:45.997451   35567 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:17:45.997498   35567 ssh_runner.go:195] Run: ls
	I0915 07:17:46.002218   35567 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0915 07:17:46.006863   35567 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0915 07:17:46.006887   35567 status.go:422] ha-670527-m02 apiserver status = Running (err=<nil>)
	I0915 07:17:46.006897   35567 status.go:257] ha-670527-m02 status: &{Name:ha-670527-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:17:46.006923   35567 status.go:255] checking status of ha-670527-m04 ...
	I0915 07:17:46.007244   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:46.007288   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:46.022227   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0915 07:17:46.022656   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:46.023169   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:46.023190   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:46.023513   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:46.023684   35567 main.go:141] libmachine: (ha-670527-m04) Calling .GetState
	I0915 07:17:46.025099   35567 status.go:330] ha-670527-m04 host status = "Running" (err=<nil>)
	I0915 07:17:46.025115   35567 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:17:46.025380   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:46.025418   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:46.040549   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33909
	I0915 07:17:46.040943   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:46.041395   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:46.041421   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:46.041702   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:46.041891   35567 main.go:141] libmachine: (ha-670527-m04) Calling .GetIP
	I0915 07:17:46.044719   35567 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:17:46.045111   35567 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:15:12 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:17:46.045130   35567 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:17:46.045357   35567 host.go:66] Checking if "ha-670527-m04" exists ...
	I0915 07:17:46.045652   35567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:17:46.045694   35567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:17:46.060226   35567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0915 07:17:46.060609   35567 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:17:46.061023   35567 main.go:141] libmachine: Using API Version  1
	I0915 07:17:46.061043   35567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:17:46.061413   35567 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:17:46.061571   35567 main.go:141] libmachine: (ha-670527-m04) Calling .DriverName
	I0915 07:17:46.061734   35567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:17:46.061751   35567 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHHostname
	I0915 07:17:46.064388   35567 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:17:46.064812   35567 main.go:141] libmachine: (ha-670527-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:dd:09", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:15:12 +0000 UTC Type:0 Mac:52:54:00:a7:dd:09 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-670527-m04 Clientid:01:52:54:00:a7:dd:09}
	I0915 07:17:46.064847   35567 main.go:141] libmachine: (ha-670527-m04) DBG | domain ha-670527-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:a7:dd:09 in network mk-ha-670527
	I0915 07:17:46.064957   35567 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHPort
	I0915 07:17:46.065127   35567 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHKeyPath
	I0915 07:17:46.065277   35567 main.go:141] libmachine: (ha-670527-m04) Calling .GetSSHUsername
	I0915 07:17:46.065405   35567 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527-m04/id_rsa Username:docker}
	W0915 07:18:04.626033   35567 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0915 07:18:04.626153   35567 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0915 07:18:04.626178   35567 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0915 07:18:04.626187   35567 status.go:257] ha-670527-m04 status: &{Name:ha-670527-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0915 07:18:04.626221   35567 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-670527 -n ha-670527
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-670527 logs -n 25: (1.69425446s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m04:/home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | ha-670527-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m04 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:05 UTC |
	|         | /home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp testdata/cp-test.txt                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:05 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527:/home/docker/cp-test_ha-670527-m04_ha-670527.txt                       |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527 sudo cat                                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527.txt                                 |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m02:/home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m02 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m03:/home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n                                                                 | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-670527-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-670527 ssh -n ha-670527-m03 sudo cat                                          | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-670527 node stop m02 -v=7                                                     | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-670527 node start m02 -v=7                                                    | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-670527 -v=7                                                           | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-670527 -v=7                                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-670527 --wait=true -v=7                                                    | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:11 UTC | 15 Sep 24 07:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-670527                                                                | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:15 UTC |                     |
	| node    | ha-670527 node delete m03 -v=7                                                   | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:15 UTC | 15 Sep 24 07:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-670527 stop -v=7                                                              | ha-670527 | jenkins | v1.34.0 | 15 Sep 24 07:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:11:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:11:31.706810   33084 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:11:31.706924   33084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:11:31.706935   33084 out.go:358] Setting ErrFile to fd 2...
	I0915 07:11:31.706939   33084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:11:31.707158   33084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:11:31.707748   33084 out.go:352] Setting JSON to false
	I0915 07:11:31.708686   33084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3238,"bootTime":1726381054,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:11:31.708786   33084 start.go:139] virtualization: kvm guest
	I0915 07:11:31.711320   33084 out.go:177] * [ha-670527] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:11:31.712733   33084 notify.go:220] Checking for updates...
	I0915 07:11:31.712750   33084 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:11:31.714227   33084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:11:31.715816   33084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:11:31.717238   33084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:11:31.718412   33084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:11:31.719906   33084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:11:31.721887   33084 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:11:31.722016   33084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:11:31.722518   33084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:11:31.722563   33084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:11:31.739353   33084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0915 07:11:31.739829   33084 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:11:31.740400   33084 main.go:141] libmachine: Using API Version  1
	I0915 07:11:31.740424   33084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:11:31.740742   33084 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:11:31.740918   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:11:31.777801   33084 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:11:31.779445   33084 start.go:297] selected driver: kvm2
	I0915 07:11:31.779458   33084 start.go:901] validating driver "kvm2" against &{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:11:31.779612   33084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:11:31.779918   33084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:11:31.780012   33084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:11:31.794937   33084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:11:31.795732   33084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:11:31.795772   33084 cni.go:84] Creating CNI manager for ""
	I0915 07:11:31.795832   33084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0915 07:11:31.795907   33084 start.go:340] cluster config:
	{Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:11:31.796042   33084 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:11:31.797881   33084 out.go:177] * Starting "ha-670527" primary control-plane node in "ha-670527" cluster
	I0915 07:11:31.799222   33084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:11:31.799264   33084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:11:31.799282   33084 cache.go:56] Caching tarball of preloaded images
	I0915 07:11:31.799391   33084 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:11:31.799406   33084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:11:31.799512   33084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/config.json ...
	I0915 07:11:31.799729   33084 start.go:360] acquireMachinesLock for ha-670527: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:11:31.799767   33084 start.go:364] duration metric: took 22.439µs to acquireMachinesLock for "ha-670527"
	I0915 07:11:31.799789   33084 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:11:31.799799   33084 fix.go:54] fixHost starting: 
	I0915 07:11:31.800037   33084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:11:31.800107   33084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:11:31.814212   33084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I0915 07:11:31.814617   33084 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:11:31.815084   33084 main.go:141] libmachine: Using API Version  1
	I0915 07:11:31.815107   33084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:11:31.815447   33084 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:11:31.815674   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:11:31.815843   33084 main.go:141] libmachine: (ha-670527) Calling .GetState
	I0915 07:11:31.817793   33084 fix.go:112] recreateIfNeeded on ha-670527: state=Running err=<nil>
	W0915 07:11:31.817847   33084 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:11:31.820089   33084 out.go:177] * Updating the running kvm2 "ha-670527" VM ...
	I0915 07:11:31.821448   33084 machine.go:93] provisionDockerMachine start ...
	I0915 07:11:31.821477   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:11:31.821719   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:31.824223   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.824627   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:31.824646   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.824824   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:31.824992   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.825121   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.825266   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:31.825410   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:31.825591   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:31.825600   33084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:11:31.939524   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527
	
	I0915 07:11:31.939551   33084 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:11:31.939768   33084 buildroot.go:166] provisioning hostname "ha-670527"
	I0915 07:11:31.939796   33084 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:11:31.939993   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:31.942790   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.943157   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:31.943192   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:31.943252   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:31.943413   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.943547   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:31.943713   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:31.943859   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:31.944040   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:31.944055   33084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-670527 && echo "ha-670527" | sudo tee /etc/hostname
	I0915 07:11:32.070709   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-670527
	
	I0915 07:11:32.070736   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.073646   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.074092   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.074127   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.074317   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:32.074484   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.074618   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.074709   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:32.074884   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:32.075093   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:32.075109   33084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-670527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-670527/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-670527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:11:32.186624   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:11:32.186653   33084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:11:32.186683   33084 buildroot.go:174] setting up certificates
	I0915 07:11:32.186695   33084 provision.go:84] configureAuth start
	I0915 07:11:32.186712   33084 main.go:141] libmachine: (ha-670527) Calling .GetMachineName
	I0915 07:11:32.186977   33084 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:11:32.189932   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.190348   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.190368   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.190491   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.192688   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.193176   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.193221   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.193360   33084 provision.go:143] copyHostCerts
	I0915 07:11:32.193391   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:11:32.193444   33084 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:11:32.193455   33084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:11:32.193534   33084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:11:32.193652   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:11:32.193676   33084 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:11:32.193683   33084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:11:32.193727   33084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:11:32.193802   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:11:32.193842   33084 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:11:32.193849   33084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:11:32.193886   33084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:11:32.193961   33084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.ha-670527 san=[127.0.0.1 192.168.39.54 ha-670527 localhost minikube]
	I0915 07:11:32.267105   33084 provision.go:177] copyRemoteCerts
	I0915 07:11:32.267162   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:11:32.267197   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.269655   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.269973   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.269993   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.270173   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:32.270357   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.270506   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:32.270623   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:11:32.357380   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:11:32.357445   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0915 07:11:32.384500   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:11:32.384581   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 07:11:32.412234   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:11:32.412296   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:11:32.438757   33084 provision.go:87] duration metric: took 252.046876ms to configureAuth
	I0915 07:11:32.438800   33084 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:11:32.439035   33084 config.go:182] Loaded profile config "ha-670527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:11:32.439120   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:11:32.441659   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.442060   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:11:32.442096   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:11:32.442236   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:11:32.442410   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.442595   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:11:32.442767   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:11:32.442919   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:11:32.443083   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:11:32.443099   33084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:13:03.392164   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:13:03.392192   33084 machine.go:96] duration metric: took 1m31.570726721s to provisionDockerMachine
	I0915 07:13:03.392206   33084 start.go:293] postStartSetup for "ha-670527" (driver="kvm2")
	I0915 07:13:03.392221   33084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:13:03.392239   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.392512   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:13:03.392541   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.395645   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.396127   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.396152   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.396268   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.396453   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.396590   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.396740   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:13:03.483366   33084 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:13:03.487615   33084 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:13:03.487635   33084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:13:03.487697   33084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:13:03.487764   33084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:13:03.487773   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:13:03.487850   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:13:03.498784   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:13:03.525279   33084 start.go:296] duration metric: took 133.058231ms for postStartSetup
	I0915 07:13:03.525317   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.525584   33084 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0915 07:13:03.525611   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.528215   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.528598   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.528623   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.528792   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.528953   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.529090   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.529184   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	W0915 07:13:03.612230   33084 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0915 07:13:03.612259   33084 fix.go:56] duration metric: took 1m31.812453347s for fixHost
	I0915 07:13:03.612279   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.614686   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.615032   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.615056   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.615142   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.615320   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.615467   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.615579   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.615723   33084 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:03.615942   33084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0915 07:13:03.615957   33084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:13:03.722898   33084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726384383.689004182
	
	I0915 07:13:03.722921   33084 fix.go:216] guest clock: 1726384383.689004182
	I0915 07:13:03.722928   33084 fix.go:229] Guest: 2024-09-15 07:13:03.689004182 +0000 UTC Remote: 2024-09-15 07:13:03.612265191 +0000 UTC m=+91.940511290 (delta=76.738991ms)
	I0915 07:13:03.722946   33084 fix.go:200] guest clock delta is within tolerance: 76.738991ms
	I0915 07:13:03.722950   33084 start.go:83] releasing machines lock for "ha-670527", held for 1m31.923174723s
	I0915 07:13:03.722966   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.723206   33084 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:13:03.725749   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.726120   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.726146   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.726295   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.726865   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.727002   33084 main.go:141] libmachine: (ha-670527) Calling .DriverName
	I0915 07:13:03.727116   33084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:13:03.727174   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.727203   33084 ssh_runner.go:195] Run: cat /version.json
	I0915 07:13:03.727222   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHHostname
	I0915 07:13:03.729720   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.729925   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.730114   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.730136   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.730271   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.730408   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:03.730412   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.730428   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:03.730571   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.730600   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHPort
	I0915 07:13:03.730729   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHKeyPath
	I0915 07:13:03.730725   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:13:03.730881   33084 main.go:141] libmachine: (ha-670527) Calling .GetSSHUsername
	I0915 07:13:03.731004   33084 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/ha-670527/id_rsa Username:docker}
	I0915 07:13:03.811066   33084 ssh_runner.go:195] Run: systemctl --version
	I0915 07:13:03.836783   33084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:13:04.000559   33084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:13:04.006800   33084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:13:04.006866   33084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:13:04.016075   33084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:13:04.016097   33084 start.go:495] detecting cgroup driver to use...
	I0915 07:13:04.016161   33084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:13:04.032919   33084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:13:04.046899   33084 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:13:04.046986   33084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:13:04.060918   33084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:13:04.074522   33084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:13:04.226393   33084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:13:04.373125   33084 docker.go:233] disabling docker service ...
	I0915 07:13:04.373198   33084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:13:04.389408   33084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:13:04.402547   33084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:13:04.546824   33084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:13:04.693119   33084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:13:04.707066   33084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:13:04.726435   33084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:13:04.726504   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.737343   33084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:13:04.737420   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.747738   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.757880   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.768174   33084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:13:04.779364   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.789706   33084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.800722   33084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:04.811010   33084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:13:04.820531   33084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:13:04.830431   33084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:04.976537   33084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:13:05.210348   33084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:13:05.210439   33084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:13:05.218528   33084 start.go:563] Will wait 60s for crictl version
	I0915 07:13:05.218577   33084 ssh_runner.go:195] Run: which crictl
	I0915 07:13:05.222396   33084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:13:05.260472   33084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:13:05.260553   33084 ssh_runner.go:195] Run: crio --version
	I0915 07:13:05.288513   33084 ssh_runner.go:195] Run: crio --version
	I0915 07:13:05.319719   33084 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:13:05.321074   33084 main.go:141] libmachine: (ha-670527) Calling .GetIP
	I0915 07:13:05.323670   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:05.323969   33084 main.go:141] libmachine: (ha-670527) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:49:88", ip: ""} in network mk-ha-670527: {Iface:virbr1 ExpiryTime:2024-09-15 08:01:36 +0000 UTC Type:0 Mac:52:54:00:c3:49:88 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-670527 Clientid:01:52:54:00:c3:49:88}
	I0915 07:13:05.323989   33084 main.go:141] libmachine: (ha-670527) DBG | domain ha-670527 has defined IP address 192.168.39.54 and MAC address 52:54:00:c3:49:88 in network mk-ha-670527
	I0915 07:13:05.324211   33084 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:13:05.328789   33084 kubeadm.go:883] updating cluster {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:13:05.328912   33084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:13:05.328953   33084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:13:05.370084   33084 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:13:05.370106   33084 crio.go:433] Images already preloaded, skipping extraction
	I0915 07:13:05.370158   33084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:13:05.404993   33084 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:13:05.405016   33084 cache_images.go:84] Images are preloaded, skipping loading
	I0915 07:13:05.405026   33084 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0915 07:13:05.405128   33084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-670527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:13:05.405198   33084 ssh_runner.go:195] Run: crio config
	I0915 07:13:05.451641   33084 cni.go:84] Creating CNI manager for ""
	I0915 07:13:05.451666   33084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0915 07:13:05.451679   33084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:13:05.451706   33084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-670527 NodeName:ha-670527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
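With four nodes detected, minikube falls back to kindnet as the CNI and keeps the 10.244.0.0/16 pod CIDR from the kubeadm options above. A quick client-side check, assuming the ha-670527 kubeconfig context and the app=kindnet label used by the daemonset (that label appears later in this log's sandbox listing):

	kubectl --context ha-670527 -n kube-system get pods -l app=kindnet -o wide
	# one kindnet pod per node; each node's pod CIDR should be carved out of 10.244.0.0/16
	kubectl --context ha-670527 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'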
	I0915 07:13:05.451870   33084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-670527"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 07:13:05.451894   33084 kube-vip.go:115] generating kube-vip config ...
	I0915 07:13:05.451937   33084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0915 07:13:05.463397   33084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:13:05.463510   33084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
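Once this static pod is running on the current leader, the virtual IP from the address variable should be bound to the vip_interface and the API server should answer on it. A rough check on the node (the unauthenticated /healthz probe relies on the default RBAC that exposes it to anonymous clients, which is an assumption about this cluster):

	ip -4 addr show dev eth0 | grep 192.168.39.254
	curl -ks https://192.168.39.254:8443/healthz; echo
	# expected: the VIP listed on eth0 of the leader, and "ok" from the API server behind it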
	I0915 07:13:05.463565   33084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:13:05.472909   33084 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:13:05.472973   33084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0915 07:13:05.481968   33084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0915 07:13:05.498082   33084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:13:05.514194   33084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
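At this point the rendered kubeadm config is on the node, so it can be sanity-checked with the bundled kubeadm binary before it is used. The validate subcommand exists in recent kubeadm releases such as the v1.31.1 shipped here, but treat the exact invocation as an assumption rather than something this log exercises:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new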
	I0915 07:13:05.530060   33084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:13:05.546188   33084 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:13:05.550975   33084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:05.693252   33084 ssh_runner.go:195] Run: sudo systemctl start kubelet
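A quick way to verify the kubelet restart picked up the drop-in written a few lines above (unit and drop-in paths as logged; systemctl cat merges the unit file with its overrides):

	sudo systemctl is-active kubelet
	sudo systemctl cat kubelet | grep -- '--node-ip'
	sudo journalctl -u kubelet -n 20 --no-pager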
	I0915 07:13:05.708211   33084 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527 for IP: 192.168.39.54
	I0915 07:13:05.708239   33084 certs.go:194] generating shared ca certs ...
	I0915 07:13:05.708259   33084 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:05.708407   33084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:13:05.708456   33084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:13:05.708466   33084 certs.go:256] generating profile certs ...
	I0915 07:13:05.708534   33084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/client.key
	I0915 07:13:05.708559   33084 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36
	I0915 07:13:05.708579   33084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.222 192.168.39.4 192.168.39.254]
	I0915 07:13:05.912333   33084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36 ...
	I0915 07:13:05.912366   33084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36: {Name:mkede11e354e48c918d49e89c20f9ce903a7e900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:05.912537   33084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36 ...
	I0915 07:13:05.912549   33084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36: {Name:mk78a3b85dd75125c20251506ffc90e14d844b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:05.912620   33084 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt.b1a78b36 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt
	I0915 07:13:05.912783   33084 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key.b1a78b36 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key
	I0915 07:13:05.912915   33084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key
	I0915 07:13:05.912929   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:13:05.912941   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:13:05.912952   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:13:05.912965   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:13:05.912977   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:13:05.912990   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:13:05.913004   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:13:05.913015   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:13:05.913060   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:13:05.913087   33084 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:13:05.913099   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:13:05.913132   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:13:05.913166   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:13:05.913191   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:13:05.913227   33084 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:13:05.913252   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:13:05.913264   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:13:05.913276   33084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:05.913790   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:13:05.938750   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:13:05.963278   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:13:05.987285   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:13:06.011016   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0915 07:13:06.033734   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:13:06.057375   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:13:06.082317   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/ha-670527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:13:06.107283   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:13:06.132284   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:13:06.158120   33084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:13:06.184190   33084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
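Since the apiserver certificate was just regenerated for all three control-plane IPs plus the VIP (see the IP list passed to the cert generation above), its SANs can be confirmed on the node once the copy completes:

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A 1 'Subject Alternative Name'
	# should include 192.168.39.54, 192.168.39.222, 192.168.39.4 and the VIP 192.168.39.254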
	I0915 07:13:06.203248   33084 ssh_runner.go:195] Run: openssl version
	I0915 07:13:06.209520   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:13:06.222185   33084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:13:06.227208   33084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:13:06.227265   33084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:13:06.233477   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:13:06.246397   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:13:06.259265   33084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:06.263929   33084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:06.263991   33084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:06.269767   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:13:06.280612   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:13:06.293003   33084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:13:06.297625   33084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:13:06.297668   33084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:13:06.303513   33084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
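The pattern above repeats for each CA: copy the PEM into /usr/share/ca-certificates, then link it into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix, which is the name OpenSSL's default verify path looks up. A sketch of how the link name is derived, using the minikube CA as the example (b5213941 is the hash this run computed):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # -> /etc/ssl/certs/minikubeCA.pem (b5213941.0 in this run)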
	I0915 07:13:06.314265   33084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:13:06.319018   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 07:13:06.324822   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 07:13:06.330650   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 07:13:06.336471   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 07:13:06.342590   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 07:13:06.348381   33084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
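Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours. For a more readable sweep over the same files, openssl can print the expiry dates directly (paths as logged):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  printf '%-28s ' "$c"
	  sudo openssl x509 -noout -enddate -in "/var/lib/minikube/certs/$c.crt"
	done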
	I0915 07:13:06.354020   33084 kubeadm.go:392] StartCluster: {Name:ha-670527 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-670527 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:13:06.354163   33084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:13:06.354223   33084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:13:06.403806   33084 cri.go:89] found id: "16e47676692a4d5f02f13d5b02a137c073367b06fcfd27ef77109ae9fb6a3cb7"
	I0915 07:13:06.403830   33084 cri.go:89] found id: "24aa1acee0351497487e993913a0302b054a83c9ca876b69eb69e59a752f8192"
	I0915 07:13:06.403834   33084 cri.go:89] found id: "47184f847fb9cb6dbb9ea078aca39b32285cc7bfe9227f8cc205519b9f3e0d44"
	I0915 07:13:06.403837   33084 cri.go:89] found id: "fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4"
	I0915 07:13:06.403840   33084 cri.go:89] found id: "489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f"
	I0915 07:13:06.403843   33084 cri.go:89] found id: "606b9d6854130ea502e808cad6f11dd661e42dfaa3855c06ce4f49464137e7b5"
	I0915 07:13:06.403845   33084 cri.go:89] found id: "aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230"
	I0915 07:13:06.403847   33084 cri.go:89] found id: "b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0"
	I0915 07:13:06.403850   33084 cri.go:89] found id: "5733f96a0b004d9b54b8a20e477ff1601388cce8a8dba41a6a696e613170f071"
	I0915 07:13:06.403854   33084 cri.go:89] found id: "bcaf162e8fd08bf60d83f84f21cffe2d324498935459eb640d6286dfd00874cf"
	I0915 07:13:06.403857   33084 cri.go:89] found id: "bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b"
	I0915 07:13:06.403870   33084 cri.go:89] found id: "f3e8e75a7001798077598504da25ac6847fd389b64abaec0551104db70b588b6"
	I0915 07:13:06.403877   33084 cri.go:89] found id: "e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d"
	I0915 07:13:06.403882   33084 cri.go:89] found id: ""
	I0915 07:13:06.403931   33084 ssh_runner.go:195] Run: sudo runc list -f json
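The bare IDs listed above come from the --quiet form of crictl ps; dropping --quiet, or feeding one ID to crictl inspect, maps them back to pod and container names when reading a dump like this:

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect 16e47676692a4d5f02f13d5b02a137c073367b06fcfd27ef77109ae9fb6a3cb7 | head -n 20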
	
	
	==> CRI-O <==
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.310453156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3d06907-cba6-41a4-80a0-1c264f0c0e1c name=/runtime.v1.RuntimeService/Version
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.311924000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d25e146-c3fd-42de-8772-28504b2392ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.312517244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384685312492921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d25e146-c3fd-42de-8772-28504b2392ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.313022507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93d7e0ab-0e62-43c0-8bd1-5806ef4a82e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.313096469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93d7e0ab-0e62-43c0-8bd1-5806ef4a82e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.313562045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93d7e0ab-0e62-43c0-8bd1-5806ef4a82e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.355390316Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=370a3d0f-0f86-4abe-9e96-78423dede4e7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.355767760Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-rvbkj,Uid:bfdd465e-b855-4c44-b996-4e4f1e84b2f5,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384423663240006,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:04:43.005558562Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-670527,Uid:36de90b173ab93803a5e185262634eae,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726384402199360595,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{kubernetes.io/config.hash: 36de90b173ab93803a5e185262634eae,kubernetes.io/config.seen: 2024-09-15T07:13:05.513737437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4w6x7,Uid:b61b0aa7-48e9-4746-b2e9-d205b96fe557,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389967396354,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-15T07:02:19.491724383Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:62afc380-282c-4392-9ff9-7531ab5e74d1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389939455217,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":
\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-15T07:02:19.485408063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&PodSandboxMetadata{Name:kube-proxy-25xtk,Uid:c9955046-49ba-426d-9377-8d3e02fd3f37,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389932703652,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string
{kubernetes.io/config.seen: 2024-09-15T07:02:07.156541015Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-670527,Uid:5fc16d67028fdd4a0fa90a2ea4f901f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389927986018,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5fc16d67028fdd4a0fa90a2ea4f901f1,kubernetes.io/config.seen: 2024-09-15T07:02:02.419033318Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lpj44,Uid:a4a8f34c-c73f-411b-9773-18e
274a3987f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389914839519,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:02:19.493656823Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&PodSandboxMetadata{Name:kindnet-6sqhd,Uid:8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389902571753,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,k8s-app: kindnet,pod-template
-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:02:07.164569704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-670527,Uid:b7944bf4798d5d90f06f97fb5e8af6cc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389899942245,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.54:8443,kubernetes.io/config.hash: b7944bf4798d5d90f06f97fb5e8af6cc,kubernetes.io/config.seen: 2024-09-15T07:02:02.419031955Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1245558aaba5fc05dc94158ce14ce3ad729c9d328
62dccd9fde8deb40fe6e798,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-670527,Uid:1cd3623787bdcf21704486a0ac04d42d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389899020247,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1cd3623787bdcf21704486a0ac04d42d,kubernetes.io/config.seen: 2024-09-15T07:02:02.419034425Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&PodSandboxMetadata{Name:etcd-ha-670527,Uid:b27ffa51cd638ae82bad7902bf528411,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726384389880614006,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-670527,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.54:2379,kubernetes.io/config.hash: b27ffa51cd638ae82bad7902bf528411,kubernetes.io/config.seen: 2024-09-15T07:02:02.419028168Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-rvbkj,Uid:bfdd465e-b855-4c44-b996-4e4f1e84b2f5,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383884222517617,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:04:43.005558562Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lpj44,Uid:a4a8f34c-c73f-411b-9773-18e274a3987f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383739803436158,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:02:19.493656823Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4w6x7,Uid:b61b0aa7-48e9-4746-b2e9-d205b96fe557,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383739797093828,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:02:19.491724383Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&PodSandboxMetadata{Name:kube-proxy-25xtk,Uid:c9955046-49ba-426d-9377-8d3e02fd3f37,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383727475674467,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:02:07.156541015Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&PodSandboxMetadata{Name:kindnet-6sqhd,Uid:8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383727471514678,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:02:07.164569704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-670527,Uid:1cd3623787bdcf21704486a0ac04d42d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383716066460772,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1cd3623787bdcf21704486a0ac04d42d,kubernetes.io/config.seen: 2024-09-15T07:01:55.561953803Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&PodSandboxMetadata{Name:etcd-ha-670527,Uid:b27ffa51cd638ae82bad7902bf528411,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726383716030239929,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.54:2379,kubernetes.io/config.hash: b27ffa51cd6
38ae82bad7902bf528411,kubernetes.io/config.seen: 2024-09-15T07:01:55.561947063Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=370a3d0f-0f86-4abe-9e96-78423dede4e7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.356743309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d41002a-4ad7-4c02-b8f3-2bf825ad2bb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.356805804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d41002a-4ad7-4c02-b8f3-2bf825ad2bb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.357495706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d41002a-4ad7-4c02-b8f3-2bf825ad2bb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.362335562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f8e1b20-5b63-4f18-ae58-3d5a35e7585d name=/runtime.v1.RuntimeService/Version
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.362394521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f8e1b20-5b63-4f18-ae58-3d5a35e7585d name=/runtime.v1.RuntimeService/Version
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.366531435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45b0534d-6fc6-41f8-bd20-efb9b22dca2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.366938435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384685366919128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45b0534d-6fc6-41f8-bd20-efb9b22dca2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.367618628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cf71b65-3b35-4ccf-b1f0-caef23895b09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.367674052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cf71b65-3b35-4ccf-b1f0-caef23895b09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.368104180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cf71b65-3b35-4ccf-b1f0-caef23895b09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.410341844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fb84bc1-1c8f-49a8-b70e-ddb4482b29a9 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.410499238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fb84bc1-1c8f-49a8-b70e-ddb4482b29a9 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.411479623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72b95ca7-c354-4057-96d9-d7fb28129a71 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.411956816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384685411931499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72b95ca7-c354-4057-96d9-d7fb28129a71 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.412438376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9d9d11c-6a33-4c87-86d2-b9048719c8b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.412512910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9d9d11c-6a33-4c87-86d2-b9048719c8b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:18:05 ha-670527 crio[3570]: time="2024-09-15 07:18:05.413862081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c68c2fdc35ba0bfd20d3ed14b58f7548ddb9ceaccb1a339416b2a2378953fc1,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726384465500311828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726384432488462293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726384430489729237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d2baaf7128462d2cba9d28f3373cf09d184c183b5ded7fa9fe4ad4f9ac35fe,PodSandboxId:4a7d5d0c0a6631b2e5a77fb3bee2ae57fbe77371ddc71ebce1815bcdff817ee3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726384423803361333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf88539ab3da9dc631babe1cb708a2b2f0a90a6cd85b4428b91830cd9cbac63,PodSandboxId:a942053d741b5ea6b69c548b3fbfa87a5f6175a563c76df4f61cd56e68511379,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726384420491423888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62afc380-282c-4392-9ff9-7531ab5e74d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1100b50ab8e9c66344cdddcec22dbf54d91087b98ed6997d970fad28f0c8c9,PodSandboxId:fbc1bf98c6cf740157142a3ac2b5afdc983b7ee5d3cff21dfdecbde3ef204acc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726384402305600567,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36de90b173ab93803a5e185262634eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a,PodSandboxId:ffdce8f4087e8051c9f2ef05faa1b0465b98c786b30a970730a3230dd2cf68a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726384391753328410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346,PodSandboxId:e8c13324e475bfc88e5defb2df271652a5eaf717748cfd8dc1df499481199b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390640614852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1,PodSandboxId:5f50b24b84ad147e67a4b8e8d0db8a0acf366f124e80a128642778664d333112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726384390551647180,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723,PodSandboxId:e29b47453c420213876a5fc6535dd32f23ed733b58213313433936da9d5d1ec7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726384390505705121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4,PodSandboxId:872e08605b4151c8d9f1c4ff30337156069c35b2241ce6924e806434a0edb902,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726384390343659135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fc16d67028fdd4a0fa90a2ea4f901f1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b,PodSandboxId:1a4733e12bd720e84c5585401a1b5ea92eb1a32cdeaedf19c6b3814e41f76ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726384390509090854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad,PodSandboxId:290c44145bc70aceef6948cae92b8e7def89b5cc7182485b70b660c557271ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726384390403259958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7944bf4798d5d90f
06f97fb5e8af6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488,PodSandboxId:1245558aaba5fc05dc94158ce14ce3ad729c9d32862dccd9fde8deb40fe6e798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726384390327562824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d6d31c8606ffc38f26b33e8760bc2e29df5353304968a4c8a6475f881a1f6e6,PodSandboxId:ee2d1970f1e785a5bffc5a170f1b68055b299fed477db3b8a6aa538145af14b9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726383887395771547,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rvbkj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfdd465e-b855-4c44-b996-4e4f1e84b2f5,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4,PodSandboxId:f7b4d1299c815f751a84c6549a08edac0f200502280ef3c8adf67b2000a11e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740121484176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lpj44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a8f34c-c73f-411b-9773-18e274a3987f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f,PodSandboxId:6f3bebb3d80d806709d3aa43b8143b0bb0a17cd900a4ece25b16a2dee41e0033,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726383740060699820,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4w6x7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61b0aa7-48e9-4746-b2e9-d205b96fe557,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230,PodSandboxId:843991b56a260c8254073a232a93ed9550a89fb2eee213b9f7d3634faaa26fa5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726383727859440072,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6sqhd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b26b7c0-b2a0-4c3c-a492-a8c616e9b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0,PodSandboxId:594c62a0375e6fbaf9a8fddee37ca18ccebcf854aef4925a48e4b661206351f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726383727654087073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25xtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9955046-49ba-426d-9377-8d3e02fd3f37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b,PodSandboxId:6e7b02c328479f7a3b0c2e6e5ad994f4a750e39dd8ea49e04f8b5b63f8a41553,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726383716294997339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27ffa51cd638ae82bad7902bf528411,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d,PodSandboxId:58967292ecf375639442c64d828d41df9672e86bb94b3c1f805d4a6177d69973,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726383716223595590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-670527,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd3623787bdcf21704486a0ac04d42d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9d9d11c-6a33-4c87-86d2-b9048719c8b9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9c68c2fdc35ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   a942053d741b5       storage-provisioner
	5d41c86e84f15       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   872e08605b415       kube-controller-manager-ha-670527
	01401a2edbfbe       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   290c44145bc70       kube-apiserver-ha-670527
	a6d2baaf71284       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   4a7d5d0c0a663       busybox-7dff88458-rvbkj
	acf88539ab3da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   a942053d741b5       storage-provisioner
	1c1100b50ab8e       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   fbc1bf98c6cf7       kube-vip-ha-670527
	687491bc79a59       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   ffdce8f4087e8       kube-proxy-25xtk
	2bed002dfeaaf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   e8c13324e475b       coredns-7c65d6cfc9-lpj44
	425ce48c344f2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   5f50b24b84ad1       kindnet-6sqhd
	b860e6d04679a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   1a4733e12bd72       coredns-7c65d6cfc9-4w6x7
	8ab831ce85fce       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   e29b47453c420       etcd-ha-670527
	35fe255a9da10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Exited              kube-apiserver            2                   290c44145bc70       kube-apiserver-ha-670527
	d779b01c4db53       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Exited              kube-controller-manager   1                   872e08605b415       kube-controller-manager-ha-670527
	5509d991aebbe       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   1245558aaba5f       kube-scheduler-ha-670527
	1d6d31c8606ff       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   ee2d1970f1e78       busybox-7dff88458-rvbkj
	fde41666d8c29       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   f7b4d1299c815       coredns-7c65d6cfc9-lpj44
	489cc4a0fb63e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   6f3bebb3d80d8       coredns-7c65d6cfc9-4w6x7
	aa6d2372c6ae3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   843991b56a260       kindnet-6sqhd
	b75dfe3b6121c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   594c62a0375e6       kube-proxy-25xtk
	bbb55bff5eb6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   6e7b02c328479       etcd-ha-670527
	e3475f73ce55b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   58967292ecf37       kube-scheduler-ha-670527
	
	
	==> coredns [2bed002dfeaaf8eed8611413ae113727c57e802c8e93b7716c8337ecc3a05346] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56474->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:56474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35284->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35284->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [489cc4a0fb63e6b584964969a6726a8d7185d88f39dd1e3a241d82c0e615bd1f] <==
	[INFO] 10.244.2.2:37125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203403s
	[INFO] 10.244.2.2:50161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165008s
	[INFO] 10.244.2.2:56879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01294413s
	[INFO] 10.244.2.2:45083 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125708s
	[INFO] 10.244.1.2:52633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197932s
	[INFO] 10.244.1.2:50573 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770125s
	[INFO] 10.244.1.2:35701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180854s
	[INFO] 10.244.1.2:41389 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132037s
	[INFO] 10.244.1.2:58202 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183842s
	[INFO] 10.244.1.2:49817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109611s
	[INFO] 10.244.0.4:52793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113159s
	[INFO] 10.244.0.4:38656 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106719s
	[INFO] 10.244.0.4:38122 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061858s
	[INFO] 10.244.2.2:46127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114243s
	[INFO] 10.244.1.2:54602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101327s
	[INFO] 10.244.1.2:55582 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124623s
	[INFO] 10.244.0.4:55917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104871s
	[INFO] 10.244.2.2:41069 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001913s
	[INFO] 10.244.1.2:58958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140612s
	[INFO] 10.244.1.2:39608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015189s
	[INFO] 10.244.1.2:40627 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154411s
	[INFO] 10.244.0.4:53377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121746s
	[INFO] 10.244.0.4:52578 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089133s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b860e6d04679a4b8e53fedbb24d6a47886501ba788ecebe75262b49c1428598b] <==
	[INFO] plugin/kubernetes: Trace[873210321]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:17.056) (total time: 10001ms):
	Trace[873210321]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:13:27.058)
	Trace[873210321]: [10.001536302s] [10.001536302s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[59895605]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:19.221) (total time: 10001ms):
	Trace[59895605]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:13:29.222)
	Trace[59895605]: [10.001009294s] [10.001009294s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37852->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fde41666d8c295842c8ed13eaa6ea6c886e7c1e9b0a000d674f727b06a41b3d4] <==
	[INFO] 10.244.2.2:33194 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170028s
	[INFO] 10.244.2.2:40376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000180655s
	[INFO] 10.244.1.2:52585 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001449553s
	[INFO] 10.244.1.2:53060 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208928s
	[INFO] 10.244.0.4:56755 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001727309s
	[INFO] 10.244.0.4:60825 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000234694s
	[INFO] 10.244.0.4:58873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001046398s
	[INFO] 10.244.0.4:42322 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104256s
	[INFO] 10.244.0.4:34109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000038552s
	[INFO] 10.244.2.2:60809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124458s
	[INFO] 10.244.2.2:36825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093407s
	[INFO] 10.244.2.2:56100 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075616s
	[INFO] 10.244.1.2:47124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122782s
	[INFO] 10.244.1.2:55965 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096943s
	[INFO] 10.244.0.4:34915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120044s
	[INFO] 10.244.0.4:43696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073334s
	[INFO] 10.244.0.4:59415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158827s
	[INFO] 10.244.2.2:35148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177137s
	[INFO] 10.244.2.2:58466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166358s
	[INFO] 10.244.2.2:60740 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000304437s
	[INFO] 10.244.1.2:54984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149622s
	[INFO] 10.244.0.4:44476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075065s
	[INFO] 10.244.0.4:37204 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054807s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-670527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:18:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:14:00 +0000   Sun, 15 Sep 2024 07:02:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-670527
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4352c21da1154e49b4f2cd8223ef4f22
	  System UUID:                4352c21d-a115-4e49-b4f2-cd8223ef4f22
	  Boot ID:                    28f13bdf-c0fc-4804-9eaa-c62790060557
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rvbkj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-4w6x7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-lpj44             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-670527                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-6sqhd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-670527             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-670527    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-25xtk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-670527             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-670527                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m12s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-670527 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-670527 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-670527 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-670527 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   NodeNotReady             5m19s (x2 over 5m44s)  kubelet          Node ha-670527 status is now: NodeNotReady
	  Warning  ContainerGCFailed        5m3s (x2 over 6m3s)    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-670527 event: Registered Node ha-670527 in Controller
	
	
	Name:               ha-670527-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:18:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:16:50 +0000   Sun, 15 Sep 2024 07:16:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:16:50 +0000   Sun, 15 Sep 2024 07:16:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:16:50 +0000   Sun, 15 Sep 2024 07:16:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:16:50 +0000   Sun, 15 Sep 2024 07:16:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-670527-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 937badb420fd46bab8c9040c7d7b213d
	  System UUID:                937badb4-20fd-46ba-b8c9-040c7d7b213d
	  Boot ID:                    5a3946cf-1742-47d7-b935-730fee807ecb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxwp9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-670527-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-mn54b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-670527-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-670527-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kt79t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-670527-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-670527-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x3 over 15m)      kubelet          Node ha-670527-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x3 over 15m)      kubelet          Node ha-670527-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x3 over 15m)      kubelet          Node ha-670527-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeReady                14m                    kubelet          Node ha-670527-m02 status is now: NodeReady
	  Normal  RegisteredNode           13m                    node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-670527-m02 status is now: NodeNotReady
	  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node ha-670527-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node ha-670527-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-670527-m02 event: Registered Node ha-670527-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-670527-m02 status is now: NodeNotReady
	
	
	Name:               ha-670527-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-670527-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-670527
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_05_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:05:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-670527-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:15:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:16:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:16:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:16:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 15 Sep 2024 07:15:18 +0000   Sun, 15 Sep 2024 07:16:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-670527-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24d136e447f34c399b15050eaf7b094c
	  System UUID:                24d136e4-47f3-4c39-9b15-050eaf7b094c
	  Boot ID:                    228ee5dc-0839-4c73-837a-a187890e2319
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kv6sd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-4l8cf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-fq2lt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-670527-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-670527-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-670527-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-670527-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-670527-m04 event: Registered Node ha-670527-m04 in Controller
	  Warning  Rebooted                 2m47s (x3 over 2m47s)  kubelet          Node ha-670527-m04 has been rebooted, boot id: 228ee5dc-0839-4c73-837a-a187890e2319
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m47s (x4 over 2m47s)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x4 over 2m47s)  kubelet          Node ha-670527-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x4 over 2m47s)  kubelet          Node ha-670527-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m47s                  kubelet          Node ha-670527-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m47s (x2 over 2m47s)  kubelet          Node ha-670527-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s (x2 over 3m29s)   node-controller  Node ha-670527-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.438155] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.055236] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053736] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.163785] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.149779] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.293443] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.937975] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.762754] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.062847] kauditd_printk_skb: 158 callbacks suppressed
	[Sep15 07:02] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.107092] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.313948] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.323190] kauditd_printk_skb: 38 callbacks suppressed
	[Sep15 07:03] kauditd_printk_skb: 26 callbacks suppressed
	[Sep15 07:13] systemd-fstab-generator[3495]: Ignoring "noauto" option for root device
	[  +0.154106] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.170714] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.148530] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.278100] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.716835] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +4.332313] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.209534] kauditd_printk_skb: 97 callbacks suppressed
	[ +35.416650] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [8ab831ce85fce75f7b9c6e0630e703b20697d11489e00c73f5ad1f8105a52723] <==
	{"level":"info","ts":"2024-09-15T07:14:43.230421Z","caller":"traceutil/trace.go:171","msg":"trace[1819949652] transaction","detail":"{read_only:false; response_revision:2352; number_of_response:1; }","duration":"186.956008ms","start":"2024-09-15T07:14:43.043450Z","end":"2024-09-15T07:14:43.230406Z","steps":["trace[1819949652] 'process raft request'  (duration: 184.840567ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:15:21.687927Z","caller":"traceutil/trace.go:171","msg":"trace[404736881] linearizableReadLoop","detail":"{readStateIndex:2938; appliedIndex:2938; }","duration":"122.250418ms","start":"2024-09-15T07:15:21.565658Z","end":"2024-09-15T07:15:21.687908Z","steps":["trace[404736881] 'read index received'  (duration: 122.24227ms)","trace[404736881] 'applied index is now lower than readState.Index'  (duration: 6.924µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:15:21.688186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.457504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-15T07:15:21.688213Z","caller":"traceutil/trace.go:171","msg":"trace[2033213668] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:2518; }","duration":"122.567825ms","start":"2024-09-15T07:15:21.565638Z","end":"2024-09-15T07:15:21.688205Z","steps":["trace[2033213668] 'agreement among raft nodes before linearized reading'  (duration: 122.434311ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:15:21.688294Z","caller":"traceutil/trace.go:171","msg":"trace[1968903699] transaction","detail":"{read_only:false; response_revision:2519; number_of_response:1; }","duration":"181.555184ms","start":"2024-09-15T07:15:21.506724Z","end":"2024-09-15T07:15:21.688280Z","steps":["trace[1968903699] 'process raft request'  (duration: 181.209887ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:15:25.881449Z","caller":"traceutil/trace.go:171","msg":"trace[370581060] transaction","detail":"{read_only:false; response_revision:2536; number_of_response:1; }","duration":"172.483275ms","start":"2024-09-15T07:15:25.708951Z","end":"2024-09-15T07:15:25.881434Z","steps":["trace[370581060] 'process raft request'  (duration: 172.394559ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T07:15:32.096708Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.4:36256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-15T07:15:32.108574Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.4:36264","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-15T07:15:32.131011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 switched to configuration voters=(8295450472155669015 17810692791377849512)"}
	{"level":"info","ts":"2024-09-15T07:15:32.133678Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","removed-remote-peer-id":"f153910e35189484","removed-remote-peer-urls":["https://192.168.39.4:2380"]}
	{"level":"info","ts":"2024-09-15T07:15:32.133787Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f153910e35189484"}
	{"level":"warn","ts":"2024-09-15T07:15:32.133905Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:15:32.134018Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f153910e35189484"}
	{"level":"warn","ts":"2024-09-15T07:15:32.134438Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:15:32.134596Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:15:32.134689Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"warn","ts":"2024-09-15T07:15:32.135030Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484","error":"context canceled"}
	{"level":"warn","ts":"2024-09-15T07:15:32.135180Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f153910e35189484","error":"failed to read f153910e35189484 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-15T07:15:32.135251Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"warn","ts":"2024-09-15T07:15:32.135783Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484","error":"context canceled"}
	{"level":"info","ts":"2024-09-15T07:15:32.135964Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:15:32.136046Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:15:32.136214Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"731f5c40d4af6217","removed-remote-peer-id":"f153910e35189484"}
	{"level":"warn","ts":"2024-09-15T07:15:32.145585Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"731f5c40d4af6217","remote-peer-id-stream-handler":"731f5c40d4af6217","remote-peer-id-from":"f153910e35189484"}
	{"level":"warn","ts":"2024-09-15T07:15:32.148632Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"731f5c40d4af6217","remote-peer-id-stream-handler":"731f5c40d4af6217","remote-peer-id-from":"f153910e35189484"}
	
	
	==> etcd [bbb55bff5eb6ce414fedcf5827a042772cff4140ae7c099616220f87feb0ba9b] <==
	{"level":"warn","ts":"2024-09-15T07:11:32.600880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T07:11:31.725305Z","time spent":"875.566966ms","remote":"127.0.0.1:33056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 "}
	2024/09/15 07:11:32 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-15T07:11:32.633361Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:11:32.633415Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T07:11:32.635109Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"731f5c40d4af6217","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-15T07:11:32.635500Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635612Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635721Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635936Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.635993Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.636039Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.636103Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f72c4aba89afeca8"}
	{"level":"info","ts":"2024-09-15T07:11:32.636111Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636167Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636229Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636308Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636357Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636405Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.636447Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f153910e35189484"}
	{"level":"info","ts":"2024-09-15T07:11:32.638849Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"warn","ts":"2024-09-15T07:11:32.638939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.923636605s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-15T07:11:32.638979Z","caller":"traceutil/trace.go:171","msg":"trace[1657840785] range","detail":"{range_begin:; range_end:; }","duration":"8.923689869s","start":"2024-09-15T07:11:23.715281Z","end":"2024-09-15T07:11:32.638971Z","steps":["trace[1657840785] 'agreement among raft nodes before linearized reading'  (duration: 8.923635149s)"],"step_count":1}
	{"level":"error","ts":"2024-09-15T07:11:32.639036Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-15T07:11:32.639591Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-09-15T07:11:32.639675Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-670527","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	
	
	==> kernel <==
	 07:18:06 up 16 min,  0 users,  load average: 0.47, 0.75, 0.47
	Linux ha-670527 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [425ce48c344f2a7bfd0483908c366e113d159d2756cac758c87cb75f3245a3d1] <==
	I0915 07:17:21.827618       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:17:31.828453       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:17:31.828568       1 main.go:299] handling current node
	I0915 07:17:31.828605       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:17:31.828624       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:17:31.828806       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:17:31.828858       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:17:41.828882       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:17:41.828996       1 main.go:299] handling current node
	I0915 07:17:41.829060       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:17:41.829087       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:17:41.829318       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:17:41.829355       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:17:51.830856       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:17:51.830907       1 main.go:299] handling current node
	I0915 07:17:51.830925       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:17:51.830934       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:17:51.831106       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:17:51.831194       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:18:01.826303       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:18:01.826368       1 main.go:299] handling current node
	I0915 07:18:01.826397       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:18:01.826403       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:18:01.827498       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:18:01.827534       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [aa6d2372c6ae3a2d9bbcd263677a132d24ea962a88535f66fd2bba8851b27230] <==
	I0915 07:11:09.120706       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:11:09.120826       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:11:09.120994       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:11:09.121046       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:11:09.121261       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:11:09.121358       1 main.go:299] handling current node
	I0915 07:11:09.121389       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:11:09.121439       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:11:19.119244       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:11:19.119328       1 main.go:299] handling current node
	I0915 07:11:19.119358       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:11:19.119396       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:11:19.119554       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:11:19.119578       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:11:19.119640       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:11:19.119658       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:11:29.122264       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0915 07:11:29.122346       1 main.go:322] Node ha-670527-m02 has CIDR [10.244.1.0/24] 
	I0915 07:11:29.122502       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0915 07:11:29.122525       1 main.go:322] Node ha-670527-m03 has CIDR [10.244.2.0/24] 
	I0915 07:11:29.122582       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0915 07:11:29.122601       1 main.go:322] Node ha-670527-m04 has CIDR [10.244.3.0/24] 
	I0915 07:11:29.122682       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0915 07:11:29.122702       1 main.go:299] handling current node
	E0915 07:11:30.712956       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kube-apiserver [01401a2edbfbe0d194c0c88307d5861c53601db8979ec64157381aa45c5bfd2d] <==
	I0915 07:13:52.361040       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0915 07:13:52.441005       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 07:13:52.445919       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:13:52.445962       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0915 07:13:52.446055       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 07:13:52.446321       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:13:52.451445       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:13:52.451481       1 policy_source.go:224] refreshing policies
	I0915 07:13:52.451587       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:13:52.452027       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:13:52.459653       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:13:52.459680       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:13:52.461304       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:13:52.461393       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:13:52.461411       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:13:52.461417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:13:52.461421       1 cache.go:39] Caches are synced for autoregister controller
	W0915 07:13:52.461800       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0915 07:13:52.464636       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 07:13:52.476675       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0915 07:13:52.490072       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0915 07:13:52.533735       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:13:53.360206       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0915 07:13:53.910970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4 192.168.39.54]
	W0915 07:15:43.912753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.222 192.168.39.54]
	
	
	==> kube-apiserver [35fe255a9da1007fa3bc6e442a97627b3685ba52cff7b34c92eebe87eae3f8ad] <==
	I0915 07:13:11.097204       1 options.go:228] external host was not specified, using 192.168.39.54
	I0915 07:13:11.102400       1 server.go:142] Version: v1.31.1
	I0915 07:13:11.102513       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:13:11.718462       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0915 07:13:11.732279       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:13:11.744989       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0915 07:13:11.745092       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0915 07:13:11.745733       1 instance.go:232] Using reconciler: lease
	W0915 07:13:31.718734       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0915 07:13:31.718734       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0915 07:13:31.747492       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5d41c86e84f1548904c69cb25b15c48a4ec0c7b1c10b5508d2174608b608f7e7] <==
	I0915 07:16:34.059670       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m04"
	E0915 07:16:36.199493       1 gc_controller.go:151] "Failed to get node" err="node \"ha-670527-m03\" not found" logger="pod-garbage-collector-controller" node="ha-670527-m03"
	E0915 07:16:36.199542       1 gc_controller.go:151] "Failed to get node" err="node \"ha-670527-m03\" not found" logger="pod-garbage-collector-controller" node="ha-670527-m03"
	E0915 07:16:36.199550       1 gc_controller.go:151] "Failed to get node" err="node \"ha-670527-m03\" not found" logger="pod-garbage-collector-controller" node="ha-670527-m03"
	E0915 07:16:36.199614       1 gc_controller.go:151] "Failed to get node" err="node \"ha-670527-m03\" not found" logger="pod-garbage-collector-controller" node="ha-670527-m03"
	E0915 07:16:36.199619       1 gc_controller.go:151] "Failed to get node" err="node \"ha-670527-m03\" not found" logger="pod-garbage-collector-controller" node="ha-670527-m03"
	I0915 07:16:36.212631       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-670527-m03"
	I0915 07:16:36.239550       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-670527-m03"
	I0915 07:16:36.239648       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-670527-m03"
	I0915 07:16:36.267202       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-670527-m03"
	I0915 07:16:36.267284       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mbcxc"
	I0915 07:16:36.323399       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mbcxc"
	I0915 07:16:36.323483       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-670527-m03"
	I0915 07:16:36.358548       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-670527-m03"
	I0915 07:16:36.358588       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-670527-m03"
	I0915 07:16:36.389050       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-670527-m03"
	I0915 07:16:36.389193       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-670527-m03"
	I0915 07:16:36.423334       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-670527-m03"
	I0915 07:16:36.423528       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fcgbj"
	I0915 07:16:36.451029       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fcgbj"
	I0915 07:16:48.831675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.176678ms"
	I0915 07:16:48.831769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.216µs"
	I0915 07:16:50.887992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:16:50.910189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	I0915 07:16:51.460493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-670527-m02"
	
	
	==> kube-controller-manager [d779b01c4db5393047ba7de2867b1075aa86f443e7203418e2ec025eb71d95b4] <==
	I0915 07:13:11.877095       1 serving.go:386] Generated self-signed cert in-memory
	I0915 07:13:12.742551       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0915 07:13:12.742589       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:13:12.744022       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0915 07:13:12.744724       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0915 07:13:12.744868       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 07:13:12.744978       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0915 07:13:32.753231       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.54:8443/healthz\": dial tcp 192.168.39.54:8443: connect: connection refused"
	
	
	==> kube-proxy [687491bc79a5922606800e4fe843e0bca1955764e539a7a6c2bb5a4eebfcf62a] <==
	E0915 07:13:52.908669       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-670527\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0915 07:13:52.908729       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0915 07:13:52.908793       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:13:52.960766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:13:52.960832       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:13:52.960915       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:13:52.963414       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:13:52.963715       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:13:52.963748       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:13:52.965637       1 config.go:199] "Starting service config controller"
	I0915 07:13:52.965700       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:13:52.965753       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:13:52.965773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:13:52.966561       1 config.go:328] "Starting node config controller"
	I0915 07:13:52.966592       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0915 07:13:55.980700       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0915 07:13:55.981401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:13:55.981883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:13:55.981531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:13:55.982041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:13:55.981598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:13:55.982115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0915 07:13:57.267046       1 shared_informer.go:320] Caches are synced for node config
	I0915 07:13:57.267464       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:13:57.466480       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [b75dfe3b6121cde198dfb6d09b2d56f1f5839523ee2495c8204eb1af347d1ff0] <==
	E0915 07:10:27.089505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:30.157685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:30.159095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:30.159410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:30.159564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:30.160051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:30.160112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:36.302224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:36.302692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:36.302736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:36.302819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:36.302343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:36.302871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:45.516538       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:45.516698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:48.588608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:48.588748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:10:48.588914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:10:48.588973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:11:07.021210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:11:07.021276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-670527&resourceVersion=1821\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:11:07.020628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:11:07.021549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0915 07:11:13.165237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825": dial tcp 192.168.39.254:8443: connect: no route to host
	E0915 07:11:13.165310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1825\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [5509d991aebbe20258eaa37cc4f213711618ca7d721723e2d9e3e87e9740e488] <==
	W0915 07:13:47.864765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:47.864886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:47.937677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.54:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:47.937796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.54:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:48.920330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:48.920474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:49.029346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:49.029437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:49.039276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:49.039347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:50.016764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0915 07:13:50.016888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0915 07:13:52.380979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 07:13:52.381056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:52.416779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 07:13:52.416895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:52.417074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 07:13:52.417246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:52.417598       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 07:13:52.417649       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 07:14:08.274914       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:15:28.868622       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2c2kx\": pod busybox-7dff88458-2c2kx is already assigned to node \"ha-670527-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2c2kx" node="ha-670527-m04"
	E0915 07:15:28.868774       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod eb6a2702-6a11-4384-b088-0801e861c669(default/busybox-7dff88458-2c2kx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2c2kx"
	E0915 07:15:28.868803       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2c2kx\": pod busybox-7dff88458-2c2kx is already assigned to node \"ha-670527-m04\"" pod="default/busybox-7dff88458-2c2kx"
	I0915 07:15:28.868820       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2c2kx" node="ha-670527-m04"
	
	
	==> kube-scheduler [e3475f73ce55b84ab2b6b323d7428d70129ec55667c640ae04e4b213aa96f62d] <==
	I0915 07:02:03.481353       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:04:42.984344       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:04:42.984530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5fc959e1-a77e-415a-bbea-3dd4303e82d9(default/busybox-7dff88458-gxwp9) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gxwp9"
	E0915 07:04:42.984580       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gxwp9\": pod busybox-7dff88458-gxwp9 is already assigned to node \"ha-670527-m02\"" pod="default/busybox-7dff88458-gxwp9"
	I0915 07:04:42.984652       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gxwp9" node="ha-670527-m02"
	E0915 07:05:23.207787       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	E0915 07:05:23.207903       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 50b6a6aa-70b7-41b5-9554-5fef223d25a4(kube-system/kube-proxy-fq2lt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fq2lt"
	E0915 07:05:23.207927       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fq2lt\": pod kube-proxy-fq2lt is already assigned to node \"ha-670527-m04\"" pod="kube-system/kube-proxy-fq2lt"
	I0915 07:05:23.207964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fq2lt" node="ha-670527-m04"
	E0915 07:11:17.519768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0915 07:11:19.429657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0915 07:11:19.649853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:21.695009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:21.701104       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0915 07:11:21.860304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0915 07:11:22.541338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0915 07:11:23.108508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:24.626023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0915 07:11:26.928063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0915 07:11:27.572956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0915 07:11:27.831582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0915 07:11:28.383274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0915 07:11:28.615806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0915 07:11:30.537002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0915 07:11:32.559719       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 15 07:16:52 ha-670527 kubelet[1303]: E0915 07:16:52.770218    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384612768526866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:02 ha-670527 kubelet[1303]: E0915 07:17:02.498270    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:17:02 ha-670527 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:17:02 ha-670527 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:17:02 ha-670527 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:17:02 ha-670527 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:17:02 ha-670527 kubelet[1303]: E0915 07:17:02.772666    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384622772356125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:02 ha-670527 kubelet[1303]: E0915 07:17:02.772690    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384622772356125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:12 ha-670527 kubelet[1303]: E0915 07:17:12.773947    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384632773560693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:12 ha-670527 kubelet[1303]: E0915 07:17:12.773989    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384632773560693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:22 ha-670527 kubelet[1303]: E0915 07:17:22.778215    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384642777698176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:22 ha-670527 kubelet[1303]: E0915 07:17:22.778310    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384642777698176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:32 ha-670527 kubelet[1303]: E0915 07:17:32.779545    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384652779247997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:32 ha-670527 kubelet[1303]: E0915 07:17:32.779570    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384652779247997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:42 ha-670527 kubelet[1303]: E0915 07:17:42.780837    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384662780615571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:42 ha-670527 kubelet[1303]: E0915 07:17:42.780879    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384662780615571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:52 ha-670527 kubelet[1303]: E0915 07:17:52.782414    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384672781978078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:17:52 ha-670527 kubelet[1303]: E0915 07:17:52.783345    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384672781978078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:18:02 ha-670527 kubelet[1303]: E0915 07:18:02.489399    1303 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:18:02 ha-670527 kubelet[1303]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:18:02 ha-670527 kubelet[1303]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:18:02 ha-670527 kubelet[1303]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:18:02 ha-670527 kubelet[1303]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:18:02 ha-670527 kubelet[1303]: E0915 07:18:02.785052    1303 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384682784862517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:18:02 ha-670527 kubelet[1303]: E0915 07:18:02.785084    1303 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384682784862517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:18:04.940503   35737 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19644-6166/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-670527 -n ha-670527
helpers_test.go:261: (dbg) Run:  kubectl --context ha-670527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.86s)
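
	[editor's note] For readers reproducing this locally: a minimal, hypothetical Go sketch of the two post-mortem checks the harness runs above (the commands are taken verbatim from the helpers_test.go:254 and helpers_test.go:261 lines; the profile/context name ha-670527 and the binary path are specific to this run and may differ elsewhere).

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		profile := "ha-670527" // profile/context name from this run (assumption: adjust for your environment)

		// API-server status for the named node of the profile (same command as helpers_test.go:254).
		status := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
		status.Stdout, status.Stderr = os.Stdout, os.Stderr
		_ = status.Run()

		// Names of any pods not in phase Running, across all namespaces (same command as helpers_test.go:261).
		pods := exec.Command("kubectl", "--context", profile, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running")
		pods.Stdout, pods.Stderr = os.Stdout, os.Stderr
		_ = pods.Run()
	}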

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (333.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-127008
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-127008
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-127008: exit status 82 (2m1.843705492s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-127008-m03"  ...
	* Stopping node "multinode-127008-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-127008" : exit status 82
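
	[editor's note] For reference, a minimal, hypothetical Go sketch (not the actual multinode_test.go code) of re-running the stop step that timed out above with exit status 82 and then collecting the logs that the error box asks for. The binary path, profile name, and log-file name are taken from this run and are assumptions for any other environment.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a command, streaming its output, and returns any error from it.
	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		minikube := "out/minikube-linux-amd64" // binary under test in this report
		profile := "multinode-127008"          // profile used by this test run

		if err := run(minikube, "stop", "-p", profile); err != nil {
			code := -1
			if ee, ok := err.(*exec.ExitError); ok {
				code = ee.ExitCode() // the run above exited with status 82 (GUEST_STOP_TIMEOUT)
			}
			fmt.Fprintf(os.Stderr, "minikube stop failed (exit status %d); collecting logs\n", code)
			// The error box above asks for `minikube logs --file=logs.txt` to attach to a GitHub issue.
			_ = run(minikube, "-p", profile, "logs", "--file=logs.txt")
		}
	}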
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-127008 --wait=true -v=8 --alsologtostderr
E0915 07:36:02.684316   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:39.269606   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:56.198073   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-127008 --wait=true -v=8 --alsologtostderr: (3m29.726002717s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-127008
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-127008 -n multinode-127008
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-127008 logs -n 25: (1.514497698s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4167936864/001/cp-test_multinode-127008-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008:/home/docker/cp-test_multinode-127008-m02_multinode-127008.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008 sudo cat                                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m02_multinode-127008.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03:/home/docker/cp-test_multinode-127008-m02_multinode-127008-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008-m03 sudo cat                                   | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m02_multinode-127008-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp testdata/cp-test.txt                                                | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4167936864/001/cp-test_multinode-127008-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008:/home/docker/cp-test_multinode-127008-m03_multinode-127008.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008 sudo cat                                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m03_multinode-127008.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02:/home/docker/cp-test_multinode-127008-m03_multinode-127008-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008-m02 sudo cat                                   | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m03_multinode-127008-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-127008 node stop m03                                                          | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	| node    | multinode-127008 node start                                                             | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-127008                                                                | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC |                     |
	| stop    | -p multinode-127008                                                                     | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC |                     |
	| start   | -p multinode-127008                                                                     | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:35 UTC | 15 Sep 24 07:38 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-127008                                                                | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:38 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:35:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:35:00.954733   45126 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:35:00.955001   45126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:00.955011   45126 out.go:358] Setting ErrFile to fd 2...
	I0915 07:35:00.955015   45126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:00.955207   45126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:35:00.955745   45126 out.go:352] Setting JSON to false
	I0915 07:35:00.956677   45126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4647,"bootTime":1726381054,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:35:00.956777   45126 start.go:139] virtualization: kvm guest
	I0915 07:35:00.958970   45126 out.go:177] * [multinode-127008] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:35:00.960179   45126 notify.go:220] Checking for updates...
	I0915 07:35:00.960268   45126 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:35:00.961462   45126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:35:00.962842   45126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:35:00.964019   45126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:35:00.965294   45126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:35:00.966510   45126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:35:00.968289   45126 config.go:182] Loaded profile config "multinode-127008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:35:00.968412   45126 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:35:00.968835   45126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:35:00.968889   45126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:35:00.984994   45126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0915 07:35:00.985467   45126 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:35:00.986024   45126 main.go:141] libmachine: Using API Version  1
	I0915 07:35:00.986062   45126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:35:00.986441   45126 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:35:00.986593   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:35:01.022242   45126 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:35:01.023749   45126 start.go:297] selected driver: kvm2
	I0915 07:35:01.023769   45126 start.go:901] validating driver "kvm2" against &{Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:35:01.023960   45126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:35:01.024393   45126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:35:01.024488   45126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:35:01.039311   45126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:35:01.040088   45126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:35:01.040124   45126 cni.go:84] Creating CNI manager for ""
	I0915 07:35:01.040179   45126 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0915 07:35:01.040239   45126 start.go:340] cluster config:
	{Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:35:01.040356   45126 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:35:01.042265   45126 out.go:177] * Starting "multinode-127008" primary control-plane node in "multinode-127008" cluster
	I0915 07:35:01.043603   45126 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:35:01.043645   45126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:35:01.043653   45126 cache.go:56] Caching tarball of preloaded images
	I0915 07:35:01.043726   45126 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:35:01.043737   45126 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:35:01.043861   45126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/config.json ...
	I0915 07:35:01.044058   45126 start.go:360] acquireMachinesLock for multinode-127008: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:35:01.044100   45126 start.go:364] duration metric: took 23.436µs to acquireMachinesLock for "multinode-127008"
	I0915 07:35:01.044111   45126 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:35:01.044116   45126 fix.go:54] fixHost starting: 
	I0915 07:35:01.044382   45126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:35:01.044412   45126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:35:01.058836   45126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
	I0915 07:35:01.059324   45126 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:35:01.059773   45126 main.go:141] libmachine: Using API Version  1
	I0915 07:35:01.059793   45126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:35:01.060071   45126 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:35:01.060259   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:35:01.060392   45126 main.go:141] libmachine: (multinode-127008) Calling .GetState
	I0915 07:35:01.062106   45126 fix.go:112] recreateIfNeeded on multinode-127008: state=Running err=<nil>
	W0915 07:35:01.062134   45126 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:35:01.064320   45126 out.go:177] * Updating the running kvm2 "multinode-127008" VM ...
	I0915 07:35:01.065721   45126 machine.go:93] provisionDockerMachine start ...
	I0915 07:35:01.065756   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:35:01.065972   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.068526   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.068970   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.068999   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.069087   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.069255   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.069411   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.069530   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.069670   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.069868   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.069879   45126 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:35:01.183481   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127008
	
	I0915 07:35:01.183505   45126 main.go:141] libmachine: (multinode-127008) Calling .GetMachineName
	I0915 07:35:01.183719   45126 buildroot.go:166] provisioning hostname "multinode-127008"
	I0915 07:35:01.183740   45126 main.go:141] libmachine: (multinode-127008) Calling .GetMachineName
	I0915 07:35:01.183921   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.186468   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.186837   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.186865   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.187021   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.187209   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.187340   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.187552   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.187742   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.187910   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.187926   45126 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-127008 && echo "multinode-127008" | sudo tee /etc/hostname
	I0915 07:35:01.306241   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127008
	
	I0915 07:35:01.306278   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.308993   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.309361   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.309403   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.309573   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.309746   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.309887   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.310007   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.310148   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.310338   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.310354   45126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-127008' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-127008/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-127008' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:35:01.415164   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:35:01.415197   45126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:35:01.415215   45126 buildroot.go:174] setting up certificates
	I0915 07:35:01.415224   45126 provision.go:84] configureAuth start
	I0915 07:35:01.415252   45126 main.go:141] libmachine: (multinode-127008) Calling .GetMachineName
	I0915 07:35:01.415554   45126 main.go:141] libmachine: (multinode-127008) Calling .GetIP
	I0915 07:35:01.418195   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.418593   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.418629   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.418766   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.420920   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.421220   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.421246   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.421373   45126 provision.go:143] copyHostCerts
	I0915 07:35:01.421407   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:35:01.421444   45126 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:35:01.421453   45126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:35:01.421533   45126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:35:01.421657   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:35:01.421677   45126 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:35:01.421682   45126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:35:01.421707   45126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:35:01.421762   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:35:01.421779   45126 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:35:01.421782   45126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:35:01.421803   45126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:35:01.421896   45126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.multinode-127008 san=[127.0.0.1 192.168.39.241 localhost minikube multinode-127008]
	I0915 07:35:01.545313   45126 provision.go:177] copyRemoteCerts
	I0915 07:35:01.545372   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:35:01.545393   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.548002   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.548453   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.548491   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.548697   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.548886   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.549057   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.549234   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:35:01.632985   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:35:01.633058   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:35:01.659513   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:35:01.659598   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0915 07:35:01.686974   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:35:01.687061   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 07:35:01.713790   45126 provision.go:87] duration metric: took 298.552813ms to configureAuth
	I0915 07:35:01.713850   45126 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:35:01.714170   45126 config.go:182] Loaded profile config "multinode-127008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:35:01.714267   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.717177   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.717523   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.717554   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.717728   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.717935   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.718090   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.718240   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.718369   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.718534   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.718553   45126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:36:32.502356   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:36:32.502388   45126 machine.go:96] duration metric: took 1m31.436651117s to provisionDockerMachine
	I0915 07:36:32.502404   45126 start.go:293] postStartSetup for "multinode-127008" (driver="kvm2")
	I0915 07:36:32.502417   45126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:36:32.502439   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.502768   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:36:32.502797   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.505846   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.506255   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.506276   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.506471   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.506660   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.506899   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.507031   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:36:32.589664   45126 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:36:32.594064   45126 command_runner.go:130] > NAME=Buildroot
	I0915 07:36:32.594085   45126 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0915 07:36:32.594098   45126 command_runner.go:130] > ID=buildroot
	I0915 07:36:32.594104   45126 command_runner.go:130] > VERSION_ID=2023.02.9
	I0915 07:36:32.594110   45126 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0915 07:36:32.594376   45126 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:36:32.594408   45126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:36:32.594503   45126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:36:32.594579   45126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:36:32.594588   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:36:32.594677   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:36:32.603919   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:36:32.631957   45126 start.go:296] duration metric: took 129.540503ms for postStartSetup
	I0915 07:36:32.632000   45126 fix.go:56] duration metric: took 1m31.587884451s for fixHost
	I0915 07:36:32.632029   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.634754   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.635253   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.635275   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.635486   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.635674   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.635818   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.635928   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.636048   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:36:32.636233   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:36:32.636246   45126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:36:32.747090   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726385792.723897422
	
	I0915 07:36:32.747111   45126 fix.go:216] guest clock: 1726385792.723897422
	I0915 07:36:32.747121   45126 fix.go:229] Guest: 2024-09-15 07:36:32.723897422 +0000 UTC Remote: 2024-09-15 07:36:32.632004342 +0000 UTC m=+91.712737373 (delta=91.89308ms)
	I0915 07:36:32.747144   45126 fix.go:200] guest clock delta is within tolerance: 91.89308ms
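
For context on the fix.go lines above: the guest clock (output of date +%s.%N) is compared with the controller's wall clock, and the host is accepted when the delta stays inside a tolerance. The Go sketch below only illustrates that comparison; the 2-second tolerance, the helper name, and the hardcoded timestamps (copied from the log) are assumptions for illustration, not minikube's actual fix.go code.

// clocktolerance.go - illustrative sketch of a guest-clock tolerance check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance parses the guest's "date +%s.%N" output and reports how far
// it drifts from the remote reference time, and whether that drift is acceptable.
// Float parsing loses a few hundred nanoseconds of precision, which is fine here.
func withinTolerance(guestUnix string, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestUnix, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(remote)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log lines above.
	remote := time.Date(2024, 9, 15, 7, 36, 32, 632004342, time.UTC)
	delta, ok := withinTolerance("1726385792.723897422", remote, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~91.89ms, true
}
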
	I0915 07:36:32.747150   45126 start.go:83] releasing machines lock for "multinode-127008", held for 1m31.703043829s
	I0915 07:36:32.747176   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.747478   45126 main.go:141] libmachine: (multinode-127008) Calling .GetIP
	I0915 07:36:32.750733   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.751273   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.751297   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.751493   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.752031   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.752237   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.752339   45126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:36:32.752381   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.752462   45126 ssh_runner.go:195] Run: cat /version.json
	I0915 07:36:32.752482   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.755177   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755395   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755609   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.755636   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755809   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.755881   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.755910   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755967   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.756054   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.756134   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.756215   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.756298   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:36:32.756390   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.756538   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:36:32.913453   45126 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0915 07:36:32.913530   45126 command_runner.go:130] > {"iso_version": "v1.34.0-1726358414-19644", "kicbase_version": "v0.0.45-1726281268-19643", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0915 07:36:32.913681   45126 ssh_runner.go:195] Run: systemctl --version
	I0915 07:36:32.919526   45126 command_runner.go:130] > systemd 252 (252)
	I0915 07:36:32.919554   45126 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0915 07:36:32.919758   45126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:36:33.102689   45126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 07:36:33.108757   45126 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0915 07:36:33.108830   45126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:36:33.108884   45126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:36:33.118506   45126 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:36:33.118530   45126 start.go:495] detecting cgroup driver to use...
	I0915 07:36:33.118594   45126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:36:33.135836   45126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:36:33.150605   45126 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:36:33.150663   45126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:36:33.164563   45126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:36:33.178132   45126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:36:33.320377   45126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:36:33.465271   45126 docker.go:233] disabling docker service ...
	I0915 07:36:33.465350   45126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:36:33.483545   45126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:36:33.496949   45126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:36:33.635434   45126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:36:33.781559   45126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:36:33.796250   45126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:36:33.815779   45126 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0915 07:36:33.815824   45126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:36:33.815880   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.827370   45126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:36:33.827455   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.838730   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.849411   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.859991   45126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:36:33.871461   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.882502   45126 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.893754   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.905143   45126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:36:33.914724   45126 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0915 07:36:33.914826   45126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:36:33.924236   45126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:36:34.058598   45126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:36:43.325857   45126 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.267217442s)
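
The cri-o configuration steps above (pause image, cgroup manager) are plain sed rewrites of /etc/crio/crio.conf.d/02-crio.conf followed by a daemon reload and restart. The small Go helper below merely composes those two sed command strings for illustration; the command text is copied from the log, but the helper itself is an assumption, not minikube's crio.go.

// criocfg.go - sketch that builds the sed commands seen in the log above.
package main

import "fmt"

const dropIn = "/etc/crio/crio.conf.d/02-crio.conf"

// pauseImageCmd rewrites the pause_image setting in the cri-o drop-in.
func pauseImageCmd(image string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, image, dropIn)
}

// cgroupManagerCmd rewrites the cgroup_manager setting in the cri-o drop-in.
func cgroupManagerCmd(manager string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, manager, dropIn)
}

func main() {
	fmt.Println(pauseImageCmd("registry.k8s.io/pause:3.10"))
	fmt.Println(cgroupManagerCmd("cgroupfs"))
}
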
	I0915 07:36:43.325894   45126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:36:43.325953   45126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:36:43.331511   45126 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0915 07:36:43.331540   45126 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0915 07:36:43.331551   45126 command_runner.go:130] > Device: 0,22	Inode: 1386        Links: 1
	I0915 07:36:43.331561   45126 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0915 07:36:43.331568   45126 command_runner.go:130] > Access: 2024-09-15 07:36:43.157037321 +0000
	I0915 07:36:43.331577   45126 command_runner.go:130] > Modify: 2024-09-15 07:36:43.157037321 +0000
	I0915 07:36:43.331586   45126 command_runner.go:130] > Change: 2024-09-15 07:36:43.157037321 +0000
	I0915 07:36:43.331613   45126 command_runner.go:130] >  Birth: -
	I0915 07:36:43.331635   45126 start.go:563] Will wait 60s for crictl version
	I0915 07:36:43.331676   45126 ssh_runner.go:195] Run: which crictl
	I0915 07:36:43.335659   45126 command_runner.go:130] > /usr/bin/crictl
	I0915 07:36:43.335736   45126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:36:43.373587   45126 command_runner.go:130] > Version:  0.1.0
	I0915 07:36:43.373617   45126 command_runner.go:130] > RuntimeName:  cri-o
	I0915 07:36:43.373624   45126 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0915 07:36:43.373632   45126 command_runner.go:130] > RuntimeApiVersion:  v1
	I0915 07:36:43.374910   45126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:36:43.374993   45126 ssh_runner.go:195] Run: crio --version
	I0915 07:36:43.403125   45126 command_runner.go:130] > crio version 1.29.1
	I0915 07:36:43.403147   45126 command_runner.go:130] > Version:        1.29.1
	I0915 07:36:43.403152   45126 command_runner.go:130] > GitCommit:      unknown
	I0915 07:36:43.403157   45126 command_runner.go:130] > GitCommitDate:  unknown
	I0915 07:36:43.403161   45126 command_runner.go:130] > GitTreeState:   clean
	I0915 07:36:43.403166   45126 command_runner.go:130] > BuildDate:      2024-09-15T05:30:16Z
	I0915 07:36:43.403171   45126 command_runner.go:130] > GoVersion:      go1.21.6
	I0915 07:36:43.403174   45126 command_runner.go:130] > Compiler:       gc
	I0915 07:36:43.403179   45126 command_runner.go:130] > Platform:       linux/amd64
	I0915 07:36:43.403183   45126 command_runner.go:130] > Linkmode:       dynamic
	I0915 07:36:43.403187   45126 command_runner.go:130] > BuildTags:      
	I0915 07:36:43.403191   45126 command_runner.go:130] >   containers_image_ostree_stub
	I0915 07:36:43.403195   45126 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0915 07:36:43.403228   45126 command_runner.go:130] >   btrfs_noversion
	I0915 07:36:43.403240   45126 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0915 07:36:43.403244   45126 command_runner.go:130] >   libdm_no_deferred_remove
	I0915 07:36:43.403247   45126 command_runner.go:130] >   seccomp
	I0915 07:36:43.403252   45126 command_runner.go:130] > LDFlags:          unknown
	I0915 07:36:43.403258   45126 command_runner.go:130] > SeccompEnabled:   true
	I0915 07:36:43.403263   45126 command_runner.go:130] > AppArmorEnabled:  false
	I0915 07:36:43.404583   45126 ssh_runner.go:195] Run: crio --version
	I0915 07:36:43.437258   45126 command_runner.go:130] > crio version 1.29.1
	I0915 07:36:43.437285   45126 command_runner.go:130] > Version:        1.29.1
	I0915 07:36:43.437294   45126 command_runner.go:130] > GitCommit:      unknown
	I0915 07:36:43.437301   45126 command_runner.go:130] > GitCommitDate:  unknown
	I0915 07:36:43.437307   45126 command_runner.go:130] > GitTreeState:   clean
	I0915 07:36:43.437317   45126 command_runner.go:130] > BuildDate:      2024-09-15T05:30:16Z
	I0915 07:36:43.437323   45126 command_runner.go:130] > GoVersion:      go1.21.6
	I0915 07:36:43.437330   45126 command_runner.go:130] > Compiler:       gc
	I0915 07:36:43.437338   45126 command_runner.go:130] > Platform:       linux/amd64
	I0915 07:36:43.437345   45126 command_runner.go:130] > Linkmode:       dynamic
	I0915 07:36:43.437352   45126 command_runner.go:130] > BuildTags:      
	I0915 07:36:43.437359   45126 command_runner.go:130] >   containers_image_ostree_stub
	I0915 07:36:43.437367   45126 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0915 07:36:43.437376   45126 command_runner.go:130] >   btrfs_noversion
	I0915 07:36:43.437387   45126 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0915 07:36:43.437397   45126 command_runner.go:130] >   libdm_no_deferred_remove
	I0915 07:36:43.437403   45126 command_runner.go:130] >   seccomp
	I0915 07:36:43.437413   45126 command_runner.go:130] > LDFlags:          unknown
	I0915 07:36:43.437423   45126 command_runner.go:130] > SeccompEnabled:   true
	I0915 07:36:43.437432   45126 command_runner.go:130] > AppArmorEnabled:  false
	I0915 07:36:43.440531   45126 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:36:43.441726   45126 main.go:141] libmachine: (multinode-127008) Calling .GetIP
	I0915 07:36:43.444518   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:43.444860   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:43.444887   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:43.445121   45126 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:36:43.449545   45126 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0915 07:36:43.449643   45126 kubeadm.go:883] updating cluster {Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:36:43.449794   45126 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:36:43.449871   45126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:36:43.492940   45126 command_runner.go:130] > {
	I0915 07:36:43.492965   45126 command_runner.go:130] >   "images": [
	I0915 07:36:43.492971   45126 command_runner.go:130] >     {
	I0915 07:36:43.492983   45126 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0915 07:36:43.492988   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.492995   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0915 07:36:43.492998   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493002   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493009   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0915 07:36:43.493018   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0915 07:36:43.493023   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493029   45126 command_runner.go:130] >       "size": "87190579",
	I0915 07:36:43.493035   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493042   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493051   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493062   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493068   45126 command_runner.go:130] >     },
	I0915 07:36:43.493074   45126 command_runner.go:130] >     {
	I0915 07:36:43.493080   45126 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0915 07:36:43.493083   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493090   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0915 07:36:43.493094   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493103   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493112   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0915 07:36:43.493126   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0915 07:36:43.493136   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493142   45126 command_runner.go:130] >       "size": "1363676",
	I0915 07:36:43.493152   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493166   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493175   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493181   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493185   45126 command_runner.go:130] >     },
	I0915 07:36:43.493189   45126 command_runner.go:130] >     {
	I0915 07:36:43.493203   45126 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0915 07:36:43.493209   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493217   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0915 07:36:43.493225   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493232   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493248   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0915 07:36:43.493263   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0915 07:36:43.493272   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493279   45126 command_runner.go:130] >       "size": "31470524",
	I0915 07:36:43.493287   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493293   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493301   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493306   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493311   45126 command_runner.go:130] >     },
	I0915 07:36:43.493315   45126 command_runner.go:130] >     {
	I0915 07:36:43.493321   45126 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0915 07:36:43.493329   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493340   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0915 07:36:43.493349   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493359   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493371   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0915 07:36:43.493391   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0915 07:36:43.493401   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493409   45126 command_runner.go:130] >       "size": "63273227",
	I0915 07:36:43.493415   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493422   45126 command_runner.go:130] >       "username": "nonroot",
	I0915 07:36:43.493431   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493440   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493449   45126 command_runner.go:130] >     },
	I0915 07:36:43.493458   45126 command_runner.go:130] >     {
	I0915 07:36:43.493470   45126 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0915 07:36:43.493479   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493487   45126 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0915 07:36:43.493494   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493498   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493508   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0915 07:36:43.493522   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0915 07:36:43.493531   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493537   45126 command_runner.go:130] >       "size": "149009664",
	I0915 07:36:43.493546   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.493552   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.493560   45126 command_runner.go:130] >       },
	I0915 07:36:43.493567   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493576   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493581   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493586   45126 command_runner.go:130] >     },
	I0915 07:36:43.493591   45126 command_runner.go:130] >     {
	I0915 07:36:43.493603   45126 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0915 07:36:43.493613   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493622   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0915 07:36:43.493630   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493640   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493654   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0915 07:36:43.493668   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0915 07:36:43.493677   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493685   45126 command_runner.go:130] >       "size": "95237600",
	I0915 07:36:43.493692   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.493698   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.493706   45126 command_runner.go:130] >       },
	I0915 07:36:43.493713   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493723   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493732   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493740   45126 command_runner.go:130] >     },
	I0915 07:36:43.493748   45126 command_runner.go:130] >     {
	I0915 07:36:43.493761   45126 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0915 07:36:43.493769   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493776   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0915 07:36:43.493781   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493790   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493816   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0915 07:36:43.493832   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0915 07:36:43.493841   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493850   45126 command_runner.go:130] >       "size": "89437508",
	I0915 07:36:43.493858   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.493866   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.493873   45126 command_runner.go:130] >       },
	I0915 07:36:43.493880   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493888   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493896   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493906   45126 command_runner.go:130] >     },
	I0915 07:36:43.493912   45126 command_runner.go:130] >     {
	I0915 07:36:43.493925   45126 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0915 07:36:43.493934   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493945   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0915 07:36:43.493953   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493962   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493979   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0915 07:36:43.493992   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0915 07:36:43.494001   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494011   45126 command_runner.go:130] >       "size": "92733849",
	I0915 07:36:43.494021   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.494028   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.494035   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.494041   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.494046   45126 command_runner.go:130] >     },
	I0915 07:36:43.494051   45126 command_runner.go:130] >     {
	I0915 07:36:43.494061   45126 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0915 07:36:43.494066   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.494071   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0915 07:36:43.494077   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494084   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.494098   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0915 07:36:43.494110   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0915 07:36:43.494116   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494122   45126 command_runner.go:130] >       "size": "68420934",
	I0915 07:36:43.494128   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.494134   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.494140   45126 command_runner.go:130] >       },
	I0915 07:36:43.494168   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.494175   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.494181   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.494186   45126 command_runner.go:130] >     },
	I0915 07:36:43.494192   45126 command_runner.go:130] >     {
	I0915 07:36:43.494207   45126 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0915 07:36:43.494216   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.494224   45126 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0915 07:36:43.494232   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494239   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.494252   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0915 07:36:43.494262   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0915 07:36:43.494268   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494280   45126 command_runner.go:130] >       "size": "742080",
	I0915 07:36:43.494289   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.494297   45126 command_runner.go:130] >         "value": "65535"
	I0915 07:36:43.494306   45126 command_runner.go:130] >       },
	I0915 07:36:43.494315   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.494325   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.494333   45126 command_runner.go:130] >       "pinned": true
	I0915 07:36:43.494341   45126 command_runner.go:130] >     }
	I0915 07:36:43.494349   45126 command_runner.go:130] >   ]
	I0915 07:36:43.494354   45126 command_runner.go:130] > }
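
The "all images are preloaded" decision reported on the next line can be derived from the crictl JSON above by collecting every repoTags entry and checking a required-image list against it. The Go sketch below illustrates that check under stated assumptions: the type and function names are hypothetical, and this is not minikube's crio.go implementation.

// preloadcheck.go - sketch of an image-preload check over `crictl images --output json`.
package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of the crictl output shown above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required tag appears in the crictl output.
func allPreloaded(crictlJSON []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range required {
		if !have[tag] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10"})
	fmt.Println(ok, err) // true <nil>
}
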
	I0915 07:36:43.494833   45126 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:36:43.494858   45126 crio.go:433] Images already preloaded, skipping extraction
	I0915 07:36:43.494916   45126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:36:43.531014   45126 command_runner.go:130] > {
	I0915 07:36:43.531035   45126 command_runner.go:130] >   "images": [
	I0915 07:36:43.531039   45126 command_runner.go:130] >     {
	I0915 07:36:43.531047   45126 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0915 07:36:43.531052   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531058   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0915 07:36:43.531061   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531065   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531074   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0915 07:36:43.531085   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0915 07:36:43.531090   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531098   45126 command_runner.go:130] >       "size": "87190579",
	I0915 07:36:43.531104   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531110   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531118   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531124   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531134   45126 command_runner.go:130] >     },
	I0915 07:36:43.531139   45126 command_runner.go:130] >     {
	I0915 07:36:43.531146   45126 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0915 07:36:43.531153   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531159   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0915 07:36:43.531162   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531167   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531177   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0915 07:36:43.531192   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0915 07:36:43.531202   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531209   45126 command_runner.go:130] >       "size": "1363676",
	I0915 07:36:43.531218   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531230   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531237   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531243   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531250   45126 command_runner.go:130] >     },
	I0915 07:36:43.531256   45126 command_runner.go:130] >     {
	I0915 07:36:43.531272   45126 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0915 07:36:43.531282   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531291   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0915 07:36:43.531299   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531306   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531319   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0915 07:36:43.531330   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0915 07:36:43.531336   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531342   45126 command_runner.go:130] >       "size": "31470524",
	I0915 07:36:43.531352   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531358   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531365   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531371   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531379   45126 command_runner.go:130] >     },
	I0915 07:36:43.531385   45126 command_runner.go:130] >     {
	I0915 07:36:43.531402   45126 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0915 07:36:43.531410   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531415   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0915 07:36:43.531421   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531431   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531446   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0915 07:36:43.531468   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0915 07:36:43.531478   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531485   45126 command_runner.go:130] >       "size": "63273227",
	I0915 07:36:43.531491   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531496   45126 command_runner.go:130] >       "username": "nonroot",
	I0915 07:36:43.531499   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531505   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531514   45126 command_runner.go:130] >     },
	I0915 07:36:43.531519   45126 command_runner.go:130] >     {
	I0915 07:36:43.531531   45126 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0915 07:36:43.531541   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531548   45126 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0915 07:36:43.531564   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531574   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531582   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0915 07:36:43.531591   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0915 07:36:43.531600   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531608   45126 command_runner.go:130] >       "size": "149009664",
	I0915 07:36:43.531617   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.531626   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.531635   45126 command_runner.go:130] >       },
	I0915 07:36:43.531644   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531653   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531662   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531668   45126 command_runner.go:130] >     },
	I0915 07:36:43.531672   45126 command_runner.go:130] >     {
	I0915 07:36:43.531683   45126 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0915 07:36:43.531692   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531703   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0915 07:36:43.531712   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531721   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531736   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0915 07:36:43.531749   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0915 07:36:43.531755   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531760   45126 command_runner.go:130] >       "size": "95237600",
	I0915 07:36:43.531769   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.531778   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.531787   45126 command_runner.go:130] >       },
	I0915 07:36:43.531796   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531805   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531814   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531822   45126 command_runner.go:130] >     },
	I0915 07:36:43.531831   45126 command_runner.go:130] >     {
	I0915 07:36:43.531840   45126 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0915 07:36:43.531847   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531862   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0915 07:36:43.531872   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531882   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531896   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0915 07:36:43.531911   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0915 07:36:43.531920   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531926   45126 command_runner.go:130] >       "size": "89437508",
	I0915 07:36:43.531930   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.531939   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.531947   45126 command_runner.go:130] >       },
	I0915 07:36:43.531957   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531966   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531973   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531981   45126 command_runner.go:130] >     },
	I0915 07:36:43.531989   45126 command_runner.go:130] >     {
	I0915 07:36:43.532001   45126 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0915 07:36:43.532008   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.532014   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0915 07:36:43.532021   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532030   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.532065   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0915 07:36:43.532080   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0915 07:36:43.532085   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532091   45126 command_runner.go:130] >       "size": "92733849",
	I0915 07:36:43.532098   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.532102   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.532111   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.532117   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.532123   45126 command_runner.go:130] >     },
	I0915 07:36:43.532128   45126 command_runner.go:130] >     {
	I0915 07:36:43.532137   45126 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0915 07:36:43.532146   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.532156   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0915 07:36:43.532171   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532179   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.532186   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0915 07:36:43.532199   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0915 07:36:43.532208   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532215   45126 command_runner.go:130] >       "size": "68420934",
	I0915 07:36:43.532224   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.532231   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.532239   45126 command_runner.go:130] >       },
	I0915 07:36:43.532246   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.532255   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.532261   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.532268   45126 command_runner.go:130] >     },
	I0915 07:36:43.532271   45126 command_runner.go:130] >     {
	I0915 07:36:43.532279   45126 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0915 07:36:43.532288   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.532299   45126 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0915 07:36:43.532307   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532316   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.532327   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0915 07:36:43.532340   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0915 07:36:43.532348   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532355   45126 command_runner.go:130] >       "size": "742080",
	I0915 07:36:43.532360   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.532366   45126 command_runner.go:130] >         "value": "65535"
	I0915 07:36:43.532375   45126 command_runner.go:130] >       },
	I0915 07:36:43.532381   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.532390   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.532405   45126 command_runner.go:130] >       "pinned": true
	I0915 07:36:43.532413   45126 command_runner.go:130] >     }
	I0915 07:36:43.532418   45126 command_runner.go:130] >   ]
	I0915 07:36:43.532425   45126 command_runner.go:130] > }
	I0915 07:36:43.532578   45126 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:36:43.532600   45126 cache_images.go:84] Images are preloaded, skipping loading
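
For reference, the image listing echoed above is the CRI image list in JSON form (the same shape that "crictl images --output json" emits), and the "Images are preloaded, skipping loading" decision amounts to checking that every required repo tag is present in that listing. The Go sketch below shows one way such a listing could be checked; it is a minimal illustration only: the struct fields simply mirror the JSON keys visible in the log (id, repoTags, repoDigests, size, pinned), and hasAllImages is a hypothetical helper, not minikube's actual implementation.

	// A minimal sketch, assuming a CRI image listing shaped like the one above.
	// The type names and hasAllImages are illustrative, not minikube code.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // sizes are reported as strings in the listing
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	// hasAllImages reports whether every wanted repo tag appears in the raw JSON listing.
	func hasAllImages(raw []byte, wanted []string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, w := range wanted {
			if !have[w] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Example data only; in practice raw would be the runtime's JSON output as shown above.
		raw := []byte(`{"images":[{"id":"example-id","repoTags":["registry.k8s.io/pause:3.10"],"repoDigests":[],"size":"742080","pinned":true}]}`)
		ok, err := hasAllImages(raw, []string{"registry.k8s.io/pause:3.10"})
		fmt.Println(ok, err) // prints "true <nil>" when every wanted tag is present
	}
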
	I0915 07:36:43.532608   45126 kubeadm.go:934] updating node { 192.168.39.241 8443 v1.31.1 crio true true} ...
	I0915 07:36:43.532727   45126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-127008 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
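
The [Unit]/[Service] fragment above is the systemd override rendered for the kubelet on this node: ExecStart is cleared and re-set with per-node flags (hostname-override, node-ip, kubeconfig paths) derived from the cluster config echoed just above. The Go sketch below shows how such an ExecStart line could be rendered from node parameters with text/template; the template string and field names (kubeletFlags, execStartTmpl) are illustrative assumptions, not minikube's own templates, and the values are simply the ones visible in the log.

	// A minimal sketch, assuming the ExecStart flags shown in the unit fragment above.
	// kubeletFlags and execStartTmpl are illustrative names, not minikube's own.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeletFlags struct {
		BinDir   string // e.g. /var/lib/minikube/binaries/v1.31.1
		NodeName string // value for --hostname-override
		NodeIP   string // value for --node-ip
	}

	const execStartTmpl = "ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}\n"

	func main() {
		t := template.Must(template.New("execstart").Parse(execStartTmpl))
		// Values taken from the log above for node multinode-127008.
		_ = t.Execute(os.Stdout, kubeletFlags{
			BinDir:   "/var/lib/minikube/binaries/v1.31.1",
			NodeName: "multinode-127008",
			NodeIP:   "192.168.39.241",
		})
	}
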
	I0915 07:36:43.532809   45126 ssh_runner.go:195] Run: crio config
	I0915 07:36:43.572249   45126 command_runner.go:130] ! time="2024-09-15 07:36:43.549944692Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0915 07:36:43.577673   45126 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0915 07:36:43.583465   45126 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0915 07:36:43.583488   45126 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0915 07:36:43.583495   45126 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0915 07:36:43.583498   45126 command_runner.go:130] > #
	I0915 07:36:43.583505   45126 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0915 07:36:43.583511   45126 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0915 07:36:43.583518   45126 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0915 07:36:43.583527   45126 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0915 07:36:43.583533   45126 command_runner.go:130] > # reload'.
	I0915 07:36:43.583543   45126 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0915 07:36:43.583555   45126 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0915 07:36:43.583568   45126 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0915 07:36:43.583575   45126 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0915 07:36:43.583578   45126 command_runner.go:130] > [crio]
	I0915 07:36:43.583595   45126 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0915 07:36:43.583602   45126 command_runner.go:130] > # containers images, in this directory.
	I0915 07:36:43.583607   45126 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0915 07:36:43.583623   45126 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0915 07:36:43.583634   45126 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0915 07:36:43.583647   45126 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0915 07:36:43.583661   45126 command_runner.go:130] > # imagestore = ""
	I0915 07:36:43.583683   45126 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0915 07:36:43.583694   45126 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0915 07:36:43.583699   45126 command_runner.go:130] > storage_driver = "overlay"
	I0915 07:36:43.583704   45126 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0915 07:36:43.583715   45126 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0915 07:36:43.583724   45126 command_runner.go:130] > storage_option = [
	I0915 07:36:43.583735   45126 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0915 07:36:43.583741   45126 command_runner.go:130] > ]
	I0915 07:36:43.583755   45126 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0915 07:36:43.583767   45126 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0915 07:36:43.583777   45126 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0915 07:36:43.583789   45126 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0915 07:36:43.583801   45126 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0915 07:36:43.583807   45126 command_runner.go:130] > # always happen on a node reboot
	I0915 07:36:43.583814   45126 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0915 07:36:43.583835   45126 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0915 07:36:43.583848   45126 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0915 07:36:43.583859   45126 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0915 07:36:43.583870   45126 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0915 07:36:43.583883   45126 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0915 07:36:43.583898   45126 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0915 07:36:43.583907   45126 command_runner.go:130] > # internal_wipe = true
	I0915 07:36:43.583918   45126 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0915 07:36:43.583929   45126 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0915 07:36:43.583938   45126 command_runner.go:130] > # internal_repair = false
	I0915 07:36:43.583947   45126 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0915 07:36:43.583961   45126 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0915 07:36:43.583972   45126 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0915 07:36:43.583984   45126 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0915 07:36:43.583995   45126 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0915 07:36:43.584004   45126 command_runner.go:130] > [crio.api]
	I0915 07:36:43.584012   45126 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0915 07:36:43.584022   45126 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0915 07:36:43.584035   45126 command_runner.go:130] > # IP address on which the stream server will listen.
	I0915 07:36:43.584045   45126 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0915 07:36:43.584059   45126 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0915 07:36:43.584070   45126 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0915 07:36:43.584079   45126 command_runner.go:130] > # stream_port = "0"
	I0915 07:36:43.584090   45126 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0915 07:36:43.584098   45126 command_runner.go:130] > # stream_enable_tls = false
	I0915 07:36:43.584105   45126 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0915 07:36:43.584113   45126 command_runner.go:130] > # stream_idle_timeout = ""
	I0915 07:36:43.584126   45126 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0915 07:36:43.584139   45126 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0915 07:36:43.584145   45126 command_runner.go:130] > # minutes.
	I0915 07:36:43.584151   45126 command_runner.go:130] > # stream_tls_cert = ""
	I0915 07:36:43.584163   45126 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0915 07:36:43.584175   45126 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0915 07:36:43.584184   45126 command_runner.go:130] > # stream_tls_key = ""
	I0915 07:36:43.584190   45126 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0915 07:36:43.584207   45126 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0915 07:36:43.584231   45126 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0915 07:36:43.584241   45126 command_runner.go:130] > # stream_tls_ca = ""
	I0915 07:36:43.584253   45126 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0915 07:36:43.584261   45126 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0915 07:36:43.584273   45126 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0915 07:36:43.584283   45126 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0915 07:36:43.584291   45126 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0915 07:36:43.584301   45126 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0915 07:36:43.584310   45126 command_runner.go:130] > [crio.runtime]
	I0915 07:36:43.584320   45126 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0915 07:36:43.584332   45126 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0915 07:36:43.584342   45126 command_runner.go:130] > # "nofile=1024:2048"
	I0915 07:36:43.584354   45126 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0915 07:36:43.584364   45126 command_runner.go:130] > # default_ulimits = [
	I0915 07:36:43.584372   45126 command_runner.go:130] > # ]
	I0915 07:36:43.584379   45126 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0915 07:36:43.584387   45126 command_runner.go:130] > # no_pivot = false
	I0915 07:36:43.584397   45126 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0915 07:36:43.584410   45126 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0915 07:36:43.584421   45126 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0915 07:36:43.584433   45126 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0915 07:36:43.584444   45126 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0915 07:36:43.584458   45126 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0915 07:36:43.584468   45126 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0915 07:36:43.584476   45126 command_runner.go:130] > # Cgroup setting for conmon
	I0915 07:36:43.584487   45126 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0915 07:36:43.584497   45126 command_runner.go:130] > conmon_cgroup = "pod"
	I0915 07:36:43.584509   45126 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0915 07:36:43.584520   45126 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0915 07:36:43.584533   45126 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0915 07:36:43.584543   45126 command_runner.go:130] > conmon_env = [
	I0915 07:36:43.584555   45126 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0915 07:36:43.584562   45126 command_runner.go:130] > ]
	I0915 07:36:43.584567   45126 command_runner.go:130] > # Additional environment variables to set for all the
	I0915 07:36:43.584578   45126 command_runner.go:130] > # containers. These are overridden if set in the
	I0915 07:36:43.584590   45126 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0915 07:36:43.584600   45126 command_runner.go:130] > # default_env = [
	I0915 07:36:43.584608   45126 command_runner.go:130] > # ]
	I0915 07:36:43.584622   45126 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0915 07:36:43.584637   45126 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0915 07:36:43.584645   45126 command_runner.go:130] > # selinux = false
	I0915 07:36:43.584656   45126 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0915 07:36:43.584665   45126 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0915 07:36:43.584676   45126 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0915 07:36:43.584687   45126 command_runner.go:130] > # seccomp_profile = ""
	I0915 07:36:43.584696   45126 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0915 07:36:43.584708   45126 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0915 07:36:43.584721   45126 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0915 07:36:43.584731   45126 command_runner.go:130] > # which might increase security.
	I0915 07:36:43.584742   45126 command_runner.go:130] > # This option is currently deprecated,
	I0915 07:36:43.584753   45126 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0915 07:36:43.584760   45126 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0915 07:36:43.584769   45126 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0915 07:36:43.584782   45126 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0915 07:36:43.584795   45126 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0915 07:36:43.584808   45126 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0915 07:36:43.584818   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.584828   45126 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0915 07:36:43.584838   45126 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0915 07:36:43.584845   45126 command_runner.go:130] > # the cgroup blockio controller.
	I0915 07:36:43.584852   45126 command_runner.go:130] > # blockio_config_file = ""
	I0915 07:36:43.584865   45126 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0915 07:36:43.584874   45126 command_runner.go:130] > # blockio parameters.
	I0915 07:36:43.584883   45126 command_runner.go:130] > # blockio_reload = false
	I0915 07:36:43.584896   45126 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0915 07:36:43.584905   45126 command_runner.go:130] > # irqbalance daemon.
	I0915 07:36:43.584916   45126 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0915 07:36:43.584926   45126 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0915 07:36:43.584937   45126 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0915 07:36:43.584951   45126 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0915 07:36:43.584963   45126 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0915 07:36:43.584976   45126 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0915 07:36:43.585011   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.585024   45126 command_runner.go:130] > # rdt_config_file = ""
	I0915 07:36:43.585033   45126 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0915 07:36:43.585044   45126 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0915 07:36:43.585066   45126 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0915 07:36:43.585076   45126 command_runner.go:130] > # separate_pull_cgroup = ""
	I0915 07:36:43.585089   45126 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0915 07:36:43.585101   45126 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0915 07:36:43.585109   45126 command_runner.go:130] > # will be added.
	I0915 07:36:43.585116   45126 command_runner.go:130] > # default_capabilities = [
	I0915 07:36:43.585122   45126 command_runner.go:130] > # 	"CHOWN",
	I0915 07:36:43.585130   45126 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0915 07:36:43.585139   45126 command_runner.go:130] > # 	"FSETID",
	I0915 07:36:43.585146   45126 command_runner.go:130] > # 	"FOWNER",
	I0915 07:36:43.585151   45126 command_runner.go:130] > # 	"SETGID",
	I0915 07:36:43.585160   45126 command_runner.go:130] > # 	"SETUID",
	I0915 07:36:43.585169   45126 command_runner.go:130] > # 	"SETPCAP",
	I0915 07:36:43.585178   45126 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0915 07:36:43.585187   45126 command_runner.go:130] > # 	"KILL",
	I0915 07:36:43.585201   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585215   45126 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0915 07:36:43.585224   45126 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0915 07:36:43.585232   45126 command_runner.go:130] > # add_inheritable_capabilities = false
	I0915 07:36:43.585245   45126 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0915 07:36:43.585255   45126 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0915 07:36:43.585264   45126 command_runner.go:130] > default_sysctls = [
	I0915 07:36:43.585271   45126 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0915 07:36:43.585278   45126 command_runner.go:130] > ]
	I0915 07:36:43.585286   45126 command_runner.go:130] > # List of devices on the host that a
	I0915 07:36:43.585299   45126 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0915 07:36:43.585307   45126 command_runner.go:130] > # allowed_devices = [
	I0915 07:36:43.585314   45126 command_runner.go:130] > # 	"/dev/fuse",
	I0915 07:36:43.585317   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585327   45126 command_runner.go:130] > # List of additional devices, specified as
	I0915 07:36:43.585343   45126 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0915 07:36:43.585356   45126 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0915 07:36:43.585368   45126 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0915 07:36:43.585377   45126 command_runner.go:130] > # additional_devices = [
	I0915 07:36:43.585386   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585394   45126 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0915 07:36:43.585401   45126 command_runner.go:130] > # cdi_spec_dirs = [
	I0915 07:36:43.585405   45126 command_runner.go:130] > # 	"/etc/cdi",
	I0915 07:36:43.585414   45126 command_runner.go:130] > # 	"/var/run/cdi",
	I0915 07:36:43.585422   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585432   45126 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0915 07:36:43.585445   45126 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0915 07:36:43.585453   45126 command_runner.go:130] > # Defaults to false.
	I0915 07:36:43.585464   45126 command_runner.go:130] > # device_ownership_from_security_context = false
	I0915 07:36:43.585476   45126 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0915 07:36:43.585487   45126 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0915 07:36:43.585493   45126 command_runner.go:130] > # hooks_dir = [
	I0915 07:36:43.585500   45126 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0915 07:36:43.585508   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585521   45126 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0915 07:36:43.585534   45126 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0915 07:36:43.585545   45126 command_runner.go:130] > # its default mounts from the following two files:
	I0915 07:36:43.585552   45126 command_runner.go:130] > #
	I0915 07:36:43.585561   45126 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0915 07:36:43.585574   45126 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0915 07:36:43.585583   45126 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0915 07:36:43.585587   45126 command_runner.go:130] > #
	I0915 07:36:43.585593   45126 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0915 07:36:43.585603   45126 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0915 07:36:43.585613   45126 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0915 07:36:43.585622   45126 command_runner.go:130] > #      only add mounts it finds in this file.
	I0915 07:36:43.585626   45126 command_runner.go:130] > #
	I0915 07:36:43.585633   45126 command_runner.go:130] > # default_mounts_file = ""
	I0915 07:36:43.585642   45126 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0915 07:36:43.585653   45126 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0915 07:36:43.585662   45126 command_runner.go:130] > pids_limit = 1024
	I0915 07:36:43.585673   45126 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0915 07:36:43.585685   45126 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0915 07:36:43.585697   45126 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0915 07:36:43.585709   45126 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0915 07:36:43.585715   45126 command_runner.go:130] > # log_size_max = -1
	I0915 07:36:43.585727   45126 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0915 07:36:43.585738   45126 command_runner.go:130] > # log_to_journald = false
	I0915 07:36:43.585748   45126 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0915 07:36:43.585760   45126 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0915 07:36:43.585770   45126 command_runner.go:130] > # Path to directory for container attach sockets.
	I0915 07:36:43.585780   45126 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0915 07:36:43.585789   45126 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0915 07:36:43.585798   45126 command_runner.go:130] > # bind_mount_prefix = ""
	I0915 07:36:43.585816   45126 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0915 07:36:43.585826   45126 command_runner.go:130] > # read_only = false
	I0915 07:36:43.585836   45126 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0915 07:36:43.585849   45126 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0915 07:36:43.585859   45126 command_runner.go:130] > # live configuration reload.
	I0915 07:36:43.585866   45126 command_runner.go:130] > # log_level = "info"
	I0915 07:36:43.585877   45126 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0915 07:36:43.585888   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.585894   45126 command_runner.go:130] > # log_filter = ""
	I0915 07:36:43.585903   45126 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0915 07:36:43.585914   45126 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0915 07:36:43.585923   45126 command_runner.go:130] > # separated by comma.
	I0915 07:36:43.585938   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.585947   45126 command_runner.go:130] > # uid_mappings = ""
	I0915 07:36:43.585958   45126 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0915 07:36:43.585970   45126 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0915 07:36:43.585980   45126 command_runner.go:130] > # separated by comma.
	I0915 07:36:43.585991   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.585997   45126 command_runner.go:130] > # gid_mappings = ""
	I0915 07:36:43.586007   45126 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0915 07:36:43.586020   45126 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0915 07:36:43.586033   45126 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0915 07:36:43.586048   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.586058   45126 command_runner.go:130] > # minimum_mappable_uid = -1
	I0915 07:36:43.586070   45126 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0915 07:36:43.586085   45126 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0915 07:36:43.586096   45126 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0915 07:36:43.586111   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.586120   45126 command_runner.go:130] > # minimum_mappable_gid = -1
	I0915 07:36:43.586131   45126 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0915 07:36:43.586140   45126 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0915 07:36:43.586149   45126 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0915 07:36:43.586158   45126 command_runner.go:130] > # ctr_stop_timeout = 30
	I0915 07:36:43.586170   45126 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0915 07:36:43.586182   45126 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0915 07:36:43.586190   45126 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0915 07:36:43.586203   45126 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0915 07:36:43.586214   45126 command_runner.go:130] > drop_infra_ctr = false
	I0915 07:36:43.586227   45126 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0915 07:36:43.586239   45126 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0915 07:36:43.586253   45126 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0915 07:36:43.586260   45126 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0915 07:36:43.586270   45126 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0915 07:36:43.586277   45126 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0915 07:36:43.586284   45126 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0915 07:36:43.586293   45126 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0915 07:36:43.586304   45126 command_runner.go:130] > # shared_cpuset = ""
	I0915 07:36:43.586314   45126 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0915 07:36:43.586325   45126 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0915 07:36:43.586334   45126 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0915 07:36:43.586348   45126 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0915 07:36:43.586358   45126 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0915 07:36:43.586370   45126 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0915 07:36:43.586379   45126 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0915 07:36:43.586388   45126 command_runner.go:130] > # enable_criu_support = false
	I0915 07:36:43.586396   45126 command_runner.go:130] > # Enable/disable the generation of the container,
	I0915 07:36:43.586408   45126 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0915 07:36:43.586419   45126 command_runner.go:130] > # enable_pod_events = false
	I0915 07:36:43.586438   45126 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0915 07:36:43.586546   45126 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0915 07:36:43.586563   45126 command_runner.go:130] > # default_runtime = "runc"
	I0915 07:36:43.586575   45126 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0915 07:36:43.586589   45126 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0915 07:36:43.586627   45126 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0915 07:36:43.586642   45126 command_runner.go:130] > # creation as a file is not desired either.
	I0915 07:36:43.586660   45126 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0915 07:36:43.586672   45126 command_runner.go:130] > # the hostname is being managed dynamically.
	I0915 07:36:43.586682   45126 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0915 07:36:43.586689   45126 command_runner.go:130] > # ]
	I0915 07:36:43.586699   45126 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0915 07:36:43.586709   45126 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0915 07:36:43.586719   45126 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0915 07:36:43.586730   45126 command_runner.go:130] > # Each entry in the table should follow the format:
	I0915 07:36:43.586738   45126 command_runner.go:130] > #
	I0915 07:36:43.586749   45126 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0915 07:36:43.586760   45126 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0915 07:36:43.586807   45126 command_runner.go:130] > # runtime_type = "oci"
	I0915 07:36:43.586819   45126 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0915 07:36:43.586830   45126 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0915 07:36:43.586840   45126 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0915 07:36:43.586850   45126 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0915 07:36:43.586860   45126 command_runner.go:130] > # monitor_env = []
	I0915 07:36:43.586870   45126 command_runner.go:130] > # privileged_without_host_devices = false
	I0915 07:36:43.586880   45126 command_runner.go:130] > # allowed_annotations = []
	I0915 07:36:43.586889   45126 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0915 07:36:43.586897   45126 command_runner.go:130] > # Where:
	I0915 07:36:43.586905   45126 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0915 07:36:43.586918   45126 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0915 07:36:43.586929   45126 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0915 07:36:43.586944   45126 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0915 07:36:43.586954   45126 command_runner.go:130] > #   in $PATH.
	I0915 07:36:43.586964   45126 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0915 07:36:43.586974   45126 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0915 07:36:43.586983   45126 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0915 07:36:43.586990   45126 command_runner.go:130] > #   state.
	I0915 07:36:43.586999   45126 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0915 07:36:43.587013   45126 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0915 07:36:43.587027   45126 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0915 07:36:43.587039   45126 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0915 07:36:43.587052   45126 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0915 07:36:43.587066   45126 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0915 07:36:43.587076   45126 command_runner.go:130] > #   The currently recognized values are:
	I0915 07:36:43.587083   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0915 07:36:43.587106   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0915 07:36:43.587120   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0915 07:36:43.587130   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0915 07:36:43.587145   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0915 07:36:43.587158   45126 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0915 07:36:43.587172   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0915 07:36:43.587184   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0915 07:36:43.587193   45126 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0915 07:36:43.587202   45126 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0915 07:36:43.587213   45126 command_runner.go:130] > #   deprecated option "conmon".
	I0915 07:36:43.587224   45126 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0915 07:36:43.587236   45126 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0915 07:36:43.587250   45126 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0915 07:36:43.587262   45126 command_runner.go:130] > #   should be moved to the container's cgroup
	I0915 07:36:43.587275   45126 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0915 07:36:43.587283   45126 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0915 07:36:43.587290   45126 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0915 07:36:43.587302   45126 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0915 07:36:43.587311   45126 command_runner.go:130] > #
	I0915 07:36:43.587320   45126 command_runner.go:130] > # Using the seccomp notifier feature:
	I0915 07:36:43.587330   45126 command_runner.go:130] > #
	I0915 07:36:43.587340   45126 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0915 07:36:43.587353   45126 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0915 07:36:43.587361   45126 command_runner.go:130] > #
	I0915 07:36:43.587371   45126 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0915 07:36:43.587382   45126 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0915 07:36:43.587388   45126 command_runner.go:130] > #
	I0915 07:36:43.587397   45126 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0915 07:36:43.587405   45126 command_runner.go:130] > # feature.
	I0915 07:36:43.587411   45126 command_runner.go:130] > #
	I0915 07:36:43.587420   45126 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0915 07:36:43.587433   45126 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0915 07:36:43.587446   45126 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0915 07:36:43.587458   45126 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0915 07:36:43.587471   45126 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0915 07:36:43.587478   45126 command_runner.go:130] > #
	I0915 07:36:43.587485   45126 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0915 07:36:43.587498   45126 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0915 07:36:43.587507   45126 command_runner.go:130] > #
	I0915 07:36:43.587517   45126 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0915 07:36:43.587529   45126 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0915 07:36:43.587537   45126 command_runner.go:130] > #
	I0915 07:36:43.587547   45126 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0915 07:36:43.587559   45126 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0915 07:36:43.587569   45126 command_runner.go:130] > # limitation.
	I0915 07:36:43.587578   45126 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0915 07:36:43.587584   45126 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0915 07:36:43.587591   45126 command_runner.go:130] > runtime_type = "oci"
	I0915 07:36:43.587601   45126 command_runner.go:130] > runtime_root = "/run/runc"
	I0915 07:36:43.587611   45126 command_runner.go:130] > runtime_config_path = ""
	I0915 07:36:43.587621   45126 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0915 07:36:43.587630   45126 command_runner.go:130] > monitor_cgroup = "pod"
	I0915 07:36:43.587640   45126 command_runner.go:130] > monitor_exec_cgroup = ""
	I0915 07:36:43.587650   45126 command_runner.go:130] > monitor_env = [
	I0915 07:36:43.587661   45126 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0915 07:36:43.587667   45126 command_runner.go:130] > ]
	I0915 07:36:43.587674   45126 command_runner.go:130] > privileged_without_host_devices = false
	I0915 07:36:43.587687   45126 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0915 07:36:43.587699   45126 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0915 07:36:43.587710   45126 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0915 07:36:43.587725   45126 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0915 07:36:43.587739   45126 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0915 07:36:43.587752   45126 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0915 07:36:43.587767   45126 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0915 07:36:43.587779   45126 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0915 07:36:43.587791   45126 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0915 07:36:43.587805   45126 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0915 07:36:43.587815   45126 command_runner.go:130] > # Example:
	I0915 07:36:43.587825   45126 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0915 07:36:43.587836   45126 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0915 07:36:43.587846   45126 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0915 07:36:43.587857   45126 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0915 07:36:43.587865   45126 command_runner.go:130] > # cpuset = 0
	I0915 07:36:43.587869   45126 command_runner.go:130] > # cpushares = "0-1"
	I0915 07:36:43.587875   45126 command_runner.go:130] > # Where:
	I0915 07:36:43.587882   45126 command_runner.go:130] > # The workload name is workload-type.
	I0915 07:36:43.587897   45126 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0915 07:36:43.587910   45126 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0915 07:36:43.587922   45126 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0915 07:36:43.587937   45126 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0915 07:36:43.587949   45126 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0915 07:36:43.587957   45126 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0915 07:36:43.587968   45126 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0915 07:36:43.587978   45126 command_runner.go:130] > # Default value is set to true
	I0915 07:36:43.587989   45126 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0915 07:36:43.588000   45126 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0915 07:36:43.588011   45126 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0915 07:36:43.588020   45126 command_runner.go:130] > # Default value is set to 'false'
	I0915 07:36:43.588031   45126 command_runner.go:130] > # disable_hostport_mapping = false
	I0915 07:36:43.588039   45126 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0915 07:36:43.588042   45126 command_runner.go:130] > #
	I0915 07:36:43.588049   45126 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0915 07:36:43.588059   45126 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0915 07:36:43.588070   45126 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0915 07:36:43.588079   45126 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0915 07:36:43.588088   45126 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0915 07:36:43.588093   45126 command_runner.go:130] > [crio.image]
	I0915 07:36:43.588127   45126 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0915 07:36:43.588133   45126 command_runner.go:130] > # default_transport = "docker://"
	I0915 07:36:43.588142   45126 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0915 07:36:43.588153   45126 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0915 07:36:43.588159   45126 command_runner.go:130] > # global_auth_file = ""
	I0915 07:36:43.588167   45126 command_runner.go:130] > # The image used to instantiate infra containers.
	I0915 07:36:43.588175   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.588182   45126 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0915 07:36:43.588192   45126 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0915 07:36:43.588201   45126 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0915 07:36:43.588208   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.588213   45126 command_runner.go:130] > # pause_image_auth_file = ""
	I0915 07:36:43.588219   45126 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0915 07:36:43.588229   45126 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0915 07:36:43.588242   45126 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0915 07:36:43.588252   45126 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0915 07:36:43.588263   45126 command_runner.go:130] > # pause_command = "/pause"
	I0915 07:36:43.588272   45126 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0915 07:36:43.588284   45126 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0915 07:36:43.588295   45126 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0915 07:36:43.588308   45126 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0915 07:36:43.588317   45126 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0915 07:36:43.588330   45126 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0915 07:36:43.588340   45126 command_runner.go:130] > # pinned_images = [
	I0915 07:36:43.588346   45126 command_runner.go:130] > # ]
	I0915 07:36:43.588358   45126 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0915 07:36:43.588372   45126 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0915 07:36:43.588385   45126 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0915 07:36:43.588398   45126 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0915 07:36:43.588409   45126 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0915 07:36:43.588417   45126 command_runner.go:130] > # signature_policy = ""
	I0915 07:36:43.588423   45126 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0915 07:36:43.588436   45126 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0915 07:36:43.588449   45126 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0915 07:36:43.588463   45126 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0915 07:36:43.588475   45126 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0915 07:36:43.588485   45126 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0915 07:36:43.588498   45126 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0915 07:36:43.588510   45126 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0915 07:36:43.588517   45126 command_runner.go:130] > # changing them here.
	I0915 07:36:43.588522   45126 command_runner.go:130] > # insecure_registries = [
	I0915 07:36:43.588529   45126 command_runner.go:130] > # ]
	I0915 07:36:43.588540   45126 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0915 07:36:43.588550   45126 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0915 07:36:43.588557   45126 command_runner.go:130] > # image_volumes = "mkdir"
	I0915 07:36:43.588569   45126 command_runner.go:130] > # Temporary directory to use for storing big files
	I0915 07:36:43.588579   45126 command_runner.go:130] > # big_files_temporary_dir = ""
	I0915 07:36:43.588591   45126 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0915 07:36:43.588600   45126 command_runner.go:130] > # CNI plugins.
	I0915 07:36:43.588609   45126 command_runner.go:130] > [crio.network]
	I0915 07:36:43.588619   45126 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0915 07:36:43.588628   45126 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0915 07:36:43.588637   45126 command_runner.go:130] > # cni_default_network = ""
	I0915 07:36:43.588650   45126 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0915 07:36:43.588660   45126 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0915 07:36:43.588675   45126 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0915 07:36:43.588685   45126 command_runner.go:130] > # plugin_dirs = [
	I0915 07:36:43.588694   45126 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0915 07:36:43.588703   45126 command_runner.go:130] > # ]
	I0915 07:36:43.588713   45126 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0915 07:36:43.588719   45126 command_runner.go:130] > [crio.metrics]
	I0915 07:36:43.588727   45126 command_runner.go:130] > # Globally enable or disable metrics support.
	I0915 07:36:43.588736   45126 command_runner.go:130] > enable_metrics = true
	I0915 07:36:43.588746   45126 command_runner.go:130] > # Specify enabled metrics collectors.
	I0915 07:36:43.588754   45126 command_runner.go:130] > # Per default all metrics are enabled.
	I0915 07:36:43.588767   45126 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0915 07:36:43.588780   45126 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0915 07:36:43.588792   45126 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0915 07:36:43.588801   45126 command_runner.go:130] > # metrics_collectors = [
	I0915 07:36:43.588809   45126 command_runner.go:130] > # 	"operations",
	I0915 07:36:43.588817   45126 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0915 07:36:43.588823   45126 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0915 07:36:43.588832   45126 command_runner.go:130] > # 	"operations_errors",
	I0915 07:36:43.588843   45126 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0915 07:36:43.588849   45126 command_runner.go:130] > # 	"image_pulls_by_name",
	I0915 07:36:43.588860   45126 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0915 07:36:43.588870   45126 command_runner.go:130] > # 	"image_pulls_failures",
	I0915 07:36:43.588880   45126 command_runner.go:130] > # 	"image_pulls_successes",
	I0915 07:36:43.588889   45126 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0915 07:36:43.588900   45126 command_runner.go:130] > # 	"image_layer_reuse",
	I0915 07:36:43.588908   45126 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0915 07:36:43.588915   45126 command_runner.go:130] > # 	"containers_oom_total",
	I0915 07:36:43.588920   45126 command_runner.go:130] > # 	"containers_oom",
	I0915 07:36:43.588929   45126 command_runner.go:130] > # 	"processes_defunct",
	I0915 07:36:43.588938   45126 command_runner.go:130] > # 	"operations_total",
	I0915 07:36:43.588946   45126 command_runner.go:130] > # 	"operations_latency_seconds",
	I0915 07:36:43.588957   45126 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0915 07:36:43.588967   45126 command_runner.go:130] > # 	"operations_errors_total",
	I0915 07:36:43.588977   45126 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0915 07:36:43.588994   45126 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0915 07:36:43.589003   45126 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0915 07:36:43.589011   45126 command_runner.go:130] > # 	"image_pulls_success_total",
	I0915 07:36:43.589015   45126 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0915 07:36:43.589025   45126 command_runner.go:130] > # 	"containers_oom_count_total",
	I0915 07:36:43.589043   45126 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0915 07:36:43.589054   45126 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0915 07:36:43.589062   45126 command_runner.go:130] > # ]
	I0915 07:36:43.589070   45126 command_runner.go:130] > # The port on which the metrics server will listen.
	I0915 07:36:43.589079   45126 command_runner.go:130] > # metrics_port = 9090
	I0915 07:36:43.589091   45126 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0915 07:36:43.589102   45126 command_runner.go:130] > # metrics_socket = ""
	I0915 07:36:43.589110   45126 command_runner.go:130] > # The certificate for the secure metrics server.
	I0915 07:36:43.589119   45126 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0915 07:36:43.589132   45126 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0915 07:36:43.589143   45126 command_runner.go:130] > # certificate on any modification event.
	I0915 07:36:43.589153   45126 command_runner.go:130] > # metrics_cert = ""
	I0915 07:36:43.589165   45126 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0915 07:36:43.589175   45126 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0915 07:36:43.589185   45126 command_runner.go:130] > # metrics_key = ""
	I0915 07:36:43.589195   45126 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0915 07:36:43.589202   45126 command_runner.go:130] > [crio.tracing]
	I0915 07:36:43.589210   45126 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0915 07:36:43.589220   45126 command_runner.go:130] > # enable_tracing = false
	I0915 07:36:43.589230   45126 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0915 07:36:43.589240   45126 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0915 07:36:43.589251   45126 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0915 07:36:43.589261   45126 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0915 07:36:43.589268   45126 command_runner.go:130] > # CRI-O NRI configuration.
	I0915 07:36:43.589277   45126 command_runner.go:130] > [crio.nri]
	I0915 07:36:43.589285   45126 command_runner.go:130] > # Globally enable or disable NRI.
	I0915 07:36:43.589292   45126 command_runner.go:130] > # enable_nri = false
	I0915 07:36:43.589297   45126 command_runner.go:130] > # NRI socket to listen on.
	I0915 07:36:43.589308   45126 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0915 07:36:43.589318   45126 command_runner.go:130] > # NRI plugin directory to use.
	I0915 07:36:43.589326   45126 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0915 07:36:43.589337   45126 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0915 07:36:43.589347   45126 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0915 07:36:43.589359   45126 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0915 07:36:43.589368   45126 command_runner.go:130] > # nri_disable_connections = false
	I0915 07:36:43.589379   45126 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0915 07:36:43.589387   45126 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0915 07:36:43.589393   45126 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0915 07:36:43.589402   45126 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0915 07:36:43.589415   45126 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0915 07:36:43.589425   45126 command_runner.go:130] > [crio.stats]
	I0915 07:36:43.589438   45126 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0915 07:36:43.589449   45126 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0915 07:36:43.589458   45126 command_runner.go:130] > # stats_collection_period = 0
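The commented-out keys above are CRI-O's built-in defaults; only a few values (for example pause_image and enable_metrics) are set explicitly in this profile's crio.conf. Purely as an illustrative sketch (not minikube or CRI-O code), the Go program below decodes such a file with the third-party github.com/BurntSushi/toml package and prints the two effective values shown in the dump; the path /etc/crio/crio.conf is an assumption.

// Sketch: read the effective pause image and metrics toggle from a CRI-O
// config like the one dumped above. Assumes /etc/crio/crio.conf and the
// third-party TOML decoder github.com/BurntSushi/toml; not minikube code.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
			MetricsPort   int  `toml:"metrics_port"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)        // e.g. registry.k8s.io/pause:3.10 above
	fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics) // true in this profile
}
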
	I0915 07:36:43.589546   45126 cni.go:84] Creating CNI manager for ""
	I0915 07:36:43.589560   45126 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0915 07:36:43.589570   45126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:36:43.589597   45126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-127008 NodeName:multinode-127008 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 07:36:43.589754   45126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-127008"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
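	
minikube renders the kubeadm/kubelet/kube-proxy config above from templates filled in with the values logged at kubeadm.go:181 (advertise address, API server port, CRI socket, node name, and so on). As a hedged illustration of that idea only, the trimmed-down template below is invented for this sketch and is not minikube's actual template.

// Sketch: render a fragment of an InitConfiguration the way minikube
// templates its kubeadm config. The template text is a made-up stand-in.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
		NodeIP           string
	}{
		AdvertiseAddress: "192.168.39.241", // values taken from the log above
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "multinode-127008",
		NodeIP:           "192.168.39.241",
	}
	tmpl := template.Must(template.New("init").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}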
	
	I0915 07:36:43.589837   45126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:36:43.600240   45126 command_runner.go:130] > kubeadm
	I0915 07:36:43.600261   45126 command_runner.go:130] > kubectl
	I0915 07:36:43.600267   45126 command_runner.go:130] > kubelet
	I0915 07:36:43.600331   45126 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:36:43.600404   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 07:36:43.610475   45126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0915 07:36:43.627783   45126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:36:43.645189   45126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0915 07:36:43.662790   45126 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I0915 07:36:43.666711   45126 command_runner.go:130] > 192.168.39.241	control-plane.minikube.internal
	I0915 07:36:43.666863   45126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:36:43.805470   45126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:36:43.821706   45126 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008 for IP: 192.168.39.241
	I0915 07:36:43.821728   45126 certs.go:194] generating shared ca certs ...
	I0915 07:36:43.821744   45126 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:36:43.821927   45126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:36:43.821980   45126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:36:43.821994   45126 certs.go:256] generating profile certs ...
	I0915 07:36:43.822098   45126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/client.key
	I0915 07:36:43.822176   45126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.key.e0ebbffb
	I0915 07:36:43.822238   45126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.key
	I0915 07:36:43.822251   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:36:43.822271   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:36:43.822289   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:36:43.822308   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:36:43.822323   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:36:43.822341   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:36:43.822360   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:36:43.822378   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:36:43.822435   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:36:43.822481   45126 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:36:43.822494   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:36:43.822522   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:36:43.822558   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:36:43.822588   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:36:43.822640   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:36:43.822683   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:36:43.822704   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:36:43.822724   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:43.823374   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:36:43.850059   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:36:43.874963   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:36:43.898575   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:36:43.922419   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0915 07:36:43.946750   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:36:43.971550   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:36:43.995255   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:36:44.018731   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:36:44.042915   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:36:44.066718   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:36:44.090805   45126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:36:44.107506   45126 ssh_runner.go:195] Run: openssl version
	I0915 07:36:44.113255   45126 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0915 07:36:44.113540   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:36:44.125078   45126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.129857   45126 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.129887   45126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.129929   45126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.135665   45126 command_runner.go:130] > 3ec20f2e
	I0915 07:36:44.135732   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:36:44.145180   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:36:44.156030   45126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.160579   45126 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.160741   45126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.160820   45126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.166715   45126 command_runner.go:130] > b5213941
	I0915 07:36:44.166771   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:36:44.176518   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:36:44.187515   45126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.192245   45126 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.192270   45126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.192302   45126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.197899   45126 command_runner.go:130] > 51391683
	I0915 07:36:44.197950   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
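Each CA certificate above is linked under /etc/ssl/certs by its OpenSSL subject hash: the output of `openssl x509 -hash -noout` followed by the ".0" suffix. A minimal Go sketch of that same step, shelling out to the openssl binary (assumed to be on PATH) and creating the symlink directly, could look like the following; it is a simplification of the two-link sequence in the log, not the certs.go implementation.

// Sketch: reproduce the "hash and symlink" step shown above for one PEM file.
// Simplified: links the PEM directly to <hash>.0; assumes openssl is installed.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}
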
	I0915 07:36:44.207252   45126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:36:44.211785   45126 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:36:44.211807   45126 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0915 07:36:44.211815   45126 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0915 07:36:44.211825   45126 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0915 07:36:44.211848   45126 command_runner.go:130] > Access: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211856   45126 command_runner.go:130] > Modify: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211861   45126 command_runner.go:130] > Change: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211867   45126 command_runner.go:130] >  Birth: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211999   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 07:36:44.217476   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.217684   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 07:36:44.223380   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.223450   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 07:36:44.229025   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.229078   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 07:36:44.234739   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.234929   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 07:36:44.240253   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.240508   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0915 07:36:44.245770   45126 command_runner.go:130] > Certificate will not expire
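The repeated `openssl x509 -checkend 86400` calls above simply ask whether each certificate will still be valid 24 hours from now; "Certificate will not expire" is openssl's success message. A native-Go equivalent of that check (a sketch under the assumption that the certificate path exists, not the certs.go code) is:

// Sketch: equivalent of `openssl x509 -noout -in cert.pem -checkend 86400`,
// i.e. "will this certificate have expired 24 hours from now?".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
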
	I0915 07:36:44.245988   45126 kubeadm.go:392] StartCluster: {Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:36:44.246081   45126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:36:44.246116   45126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:36:44.281938   45126 command_runner.go:130] > 55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6
	I0915 07:36:44.281960   45126 command_runner.go:130] > ac59c00839a05466aafe55897170f04c23d2e286c86e120536f464faa1bef2b7
	I0915 07:36:44.281966   45126 command_runner.go:130] > a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473
	I0915 07:36:44.281972   45126 command_runner.go:130] > 55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01
	I0915 07:36:44.281978   45126 command_runner.go:130] > 63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6
	I0915 07:36:44.281985   45126 command_runner.go:130] > 672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e
	I0915 07:36:44.281991   45126 command_runner.go:130] > 80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7
	I0915 07:36:44.282007   45126 command_runner.go:130] > fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe
	I0915 07:36:44.282017   45126 command_runner.go:130] > 39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0
	I0915 07:36:44.282040   45126 cri.go:89] found id: "55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6"
	I0915 07:36:44.282052   45126 cri.go:89] found id: "ac59c00839a05466aafe55897170f04c23d2e286c86e120536f464faa1bef2b7"
	I0915 07:36:44.282057   45126 cri.go:89] found id: "a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473"
	I0915 07:36:44.282061   45126 cri.go:89] found id: "55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01"
	I0915 07:36:44.282064   45126 cri.go:89] found id: "63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6"
	I0915 07:36:44.282068   45126 cri.go:89] found id: "672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e"
	I0915 07:36:44.282071   45126 cri.go:89] found id: "80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7"
	I0915 07:36:44.282074   45126 cri.go:89] found id: "fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe"
	I0915 07:36:44.282076   45126 cri.go:89] found id: "39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0"
	I0915 07:36:44.282082   45126 cri.go:89] found id: ""
	I0915 07:36:44.282125   45126 ssh_runner.go:195] Run: sudo runc list -f json
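The container discovery above is just `crictl ps -a --quiet` filtered by the kube-system namespace label: the command prints one container ID per line, and each non-empty line becomes a "found id" entry in cri.go. A small Go sketch of that parsing step (assuming crictl is installed and the CRI socket is reachable; not the cri.go code itself):

// Sketch: list kube-system container IDs the same way the log above does,
// by running crictl and splitting its one-ID-per-line output.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id == "" {
			continue
		}
		fmt.Println("found id:", id)
	}
}
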
	
	
	==> CRI-O <==
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.302418138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385911302394851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d9238ee-566d-4c5f-8642-936f7e8deb60 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.302944705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf148f2c-676c-4c6a-b34b-9b1760c896d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.302999849Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf148f2c-676c-4c6a-b34b-9b1760c896d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.303372058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf148f2c-676c-4c6a-b34b-9b1760c896d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.347869085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0734bcc0-b2e8-411e-b8ff-bb379c45965b name=/runtime.v1.RuntimeService/Version
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.347942918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0734bcc0-b2e8-411e-b8ff-bb379c45965b name=/runtime.v1.RuntimeService/Version
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.349156094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff757962-cd9e-49d3-a7e4-a96f62c74a45 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.349638269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385911349614056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff757962-cd9e-49d3-a7e4-a96f62c74a45 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.350145252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edd77976-cb47-412b-a763-1563a173295d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.350247146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edd77976-cb47-412b-a763-1563a173295d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.350652775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edd77976-cb47-412b-a763-1563a173295d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.394456244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b337fcc-0a6a-4867-9a78-e4eb2826f3bf name=/runtime.v1.RuntimeService/Version
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.394531426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b337fcc-0a6a-4867-9a78-e4eb2826f3bf name=/runtime.v1.RuntimeService/Version
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.396025225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bac7979-1146-4c64-a5a2-3dfb62c672e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.396459118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385911396434737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bac7979-1146-4c64-a5a2-3dfb62c672e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.397040685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e829c74-b8ba-4e39-aad6-a5e1513e6afb name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.397262205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e829c74-b8ba-4e39-aad6-a5e1513e6afb name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.398599832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e829c74-b8ba-4e39-aad6-a5e1513e6afb name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.441028797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63854af4-a2bf-47d6-8a2b-6eda2a606a14 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.441104411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63854af4-a2bf-47d6-8a2b-6eda2a606a14 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.442259987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff96224e-f173-45ca-8db4-cffe5ddee892 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.442671706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385911442642202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff96224e-f173-45ca-8db4-cffe5ddee892 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.443355598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c230c57-d6c7-4d68-bba3-3b7ff7696116 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.443411221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c230c57-d6c7-4d68-bba3-3b7ff7696116 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:38:31 multinode-127008 crio[2818]: time="2024-09-15 07:38:31.443744072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c230c57-d6c7-4d68-bba3-3b7ff7696116 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	37c2fd09af8f0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   4aef18e039dfe       busybox-7dff88458-zzxt7
	905abdc62484b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   6cd3f10857e40       kindnet-jxp4h
	c401cb18134d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   5586a7fefd58b       coredns-7c65d6cfc9-q9c49
	77f01d79f94bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   47be1c612fc77       storage-provisioner
	d8350dbed2d0e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   f8183a3783fad       kube-proxy-57hqd
	0db0b1951a788       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   477e40c71a816       kube-apiserver-multinode-127008
	07512bfcb800f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   e521cda9505b9       etcd-multinode-127008
	e199d57146177       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   2535da0e59746       kube-scheduler-multinode-127008
	e470f2131890f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   0f7c19e1f862a       kube-controller-manager-multinode-127008
	55950a0433ba0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Exited              coredns                   1                   3bf86a80db60b       coredns-7c65d6cfc9-q9c49
	7deb42e6614b1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   31fa412bfc060       busybox-7dff88458-zzxt7
	a92f5779f5c63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   fa08f2e1ecce8       storage-provisioner
	55cc3a66166ca       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   1fe92302cde42       kindnet-jxp4h
	63e0b614cde44       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   e8bd420d8e45e       kube-proxy-57hqd
	672943905b036       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   3fe28e7fa0bc1       kube-controller-manager-multinode-127008
	80fe08f547568       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   1706a91c6cc99       etcd-multinode-127008
	fd304bb04be08       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   4051b763c60f6       kube-scheduler-multinode-127008
	39a551c824574       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   b9d703577515d       kube-apiserver-multinode-127008
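Note: the "container status" table above is the node-local CRI view (truncated container and pod-sandbox IDs) gathered during the post-mortem. A command of roughly this shape would reproduce it by hand, assuming the multinode-127008 profile is still up; this is an illustrative sketch, not part of the captured log:

  minikube -p multinode-127008 ssh "sudo crictl ps -a"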
	
	
	==> coredns [55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43639 - 51318 "HINFO IN 6237801041562186729.5309758183064330623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013815683s
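Note: the "connection refused" errors from this CoreDNS instance are its kubernetes plugin retrying the list/watch of Namespaces, Services and EndpointSlices while the API server at 10.96.0.1:443 was down during the node restart; it eventually starts with an unsynced API, receives SIGTERM, and is replaced by the instance below. A hedged way to pull the same history after the fact, assuming the standard k8s-app=kube-dns label on the CoreDNS pods:

  kubectl -n kube-system logs -l k8s-app=kube-dns --previous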
	
	
	==> coredns [c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58976 - 62626 "HINFO IN 4417880172304727251.3385481419910881983. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015929217s
	
	
	==> describe nodes <==
	Name:               multinode-127008
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-127008
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=multinode-127008
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_30_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:29:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-127008
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:29:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:29:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:29:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:30:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    multinode-127008
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6ebf40788f64241adea960784298779
	  System UUID:                c6ebf407-88f6-4241-adea-960784298779
	  Boot ID:                    1c986149-b3d5-42c8-a740-7cb144f5b0b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zzxt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 coredns-7c65d6cfc9-q9c49                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m26s
	  kube-system                 etcd-multinode-127008                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m31s
	  kube-system                 kindnet-jxp4h                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-apiserver-multinode-127008             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-multinode-127008    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-57hqd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-multinode-127008             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m24s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m31s                kubelet          Node multinode-127008 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m31s                kubelet          Node multinode-127008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s                kubelet          Node multinode-127008 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m31s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m27s                node-controller  Node multinode-127008 event: Registered Node multinode-127008 in Controller
	  Normal  NodeReady                8m14s                kubelet          Node multinode-127008 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  105s (x8 over 106s)  kubelet          Node multinode-127008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 106s)  kubelet          Node multinode-127008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 106s)  kubelet          Node multinode-127008 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-127008 event: Registered Node multinode-127008 in Controller
	
	
	Name:               multinode-127008-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-127008-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=multinode-127008
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_37_30_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:37:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-127008-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:38:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:37:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:37:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:37:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:37:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    multinode-127008-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 401031e125c841a58c988df979495fee
	  System UUID:                401031e1-25c8-41a5-8c98-8df979495fee
	  Boot ID:                    c06de3e8-aa53-45c7-b3f2-db5a8c15b6cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-96v48    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-xvllr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m42s
	  kube-system                 kube-proxy-q96bk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  Starting                 7m37s                  kube-proxy  
	  Normal  Starting                 7m43s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m42s (x2 over 7m43s)  kubelet     Node multinode-127008-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x2 over 7m43s)  kubelet     Node multinode-127008-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x2 over 7m43s)  kubelet     Node multinode-127008-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m22s                  kubelet     Node multinode-127008-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    61s (x2 over 62s)      kubelet     Node multinode-127008-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 62s)      kubelet     Node multinode-127008-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  61s (x2 over 62s)      kubelet     Node multinode-127008-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                42s                    kubelet     Node multinode-127008-m02 status is now: NodeReady
	
	
	Name:               multinode-127008-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-127008-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=multinode-127008
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_38_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:38:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-127008-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:38:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:38:28 +0000   Sun, 15 Sep 2024 07:38:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:38:28 +0000   Sun, 15 Sep 2024 07:38:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:38:28 +0000   Sun, 15 Sep 2024 07:38:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:38:28 +0000   Sun, 15 Sep 2024 07:38:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    multinode-127008-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0afe66d32f214851ab0c9867a14d5a2c
	  System UUID:                0afe66d3-2f21-4851-ab0c-9867a14d5a2c
	  Boot ID:                    0168a4ef-ea78-4087-90a2-57e1a6dc24f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d2r9v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m44s
	  kube-system                 kube-proxy-lsd2q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m39s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m50s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m44s (x2 over 6m45s)  kubelet     Node multinode-127008-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x2 over 6m45s)  kubelet     Node multinode-127008-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x2 over 6m45s)  kubelet     Node multinode-127008-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m24s                  kubelet     Node multinode-127008-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m55s (x2 over 5m55s)  kubelet     Node multinode-127008-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m55s (x2 over 5m55s)  kubelet     Node multinode-127008-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m55s (x2 over 5m55s)  kubelet     Node multinode-127008-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m35s                  kubelet     Node multinode-127008-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-127008-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-127008-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-127008-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-127008-m03 status is now: NodeReady
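Note: all three nodes report Ready=True in the descriptions above; m02 and m03 appear to have been re-registered during the test (NodeReady 42s and 3s ago, respectively). A quick spot check of the same condition, using standard kubectl jsonpath syntax, might look like:

  kubectl get node multinode-127008-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'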
	
	
	==> dmesg <==
	[  +0.045927] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.187919] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.109009] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.283580] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.924438] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.080489] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.057265] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.986358] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.085839] kauditd_printk_skb: 69 callbacks suppressed
	[Sep15 07:30] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.127740] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.725526] kauditd_printk_skb: 60 callbacks suppressed
	[Sep15 07:31] kauditd_printk_skb: 14 callbacks suppressed
	[Sep15 07:36] systemd-fstab-generator[2742]: Ignoring "noauto" option for root device
	[  +0.150330] systemd-fstab-generator[2755]: Ignoring "noauto" option for root device
	[  +0.176673] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.136174] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.282199] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +9.745607] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +0.082633] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.841229] systemd-fstab-generator[3045]: Ignoring "noauto" option for root device
	[  +5.740788] kauditd_printk_skb: 76 callbacks suppressed
	[Sep15 07:37] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[  +0.096539] kauditd_printk_skb: 36 callbacks suppressed
	[ +19.497788] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1] <==
	{"level":"info","ts":"2024-09-15T07:36:46.941104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 switched to configuration voters=(9516406204709898018)"}
	{"level":"info","ts":"2024-09-15T07:36:46.942746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","added-peer-id":"84111105ea0e8722","added-peer-peer-urls":["https://192.168.39.241:2380"]}
	{"level":"info","ts":"2024-09-15T07:36:46.942890Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:36:46.942939Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:36:46.945840Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-15T07:36:46.946067Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"84111105ea0e8722","initial-advertise-peer-urls":["https://192.168.39.241:2380"],"listen-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T07:36:46.946115Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T07:36:46.948389Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:36:46.948421Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:36:48.711584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-15T07:36:48.711659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T07:36:48.711690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 received MsgPreVoteResp from 84111105ea0e8722 at term 2"}
	{"level":"info","ts":"2024-09-15T07:36:48.711706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.711711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 received MsgVoteResp from 84111105ea0e8722 at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.711719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became leader at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.711726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 84111105ea0e8722 elected leader 84111105ea0e8722 at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.715032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:36:48.716029Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:36:48.714981Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"84111105ea0e8722","local-member-attributes":"{Name:multinode-127008 ClientURLs:[https://192.168.39.241:2379]}","request-path":"/0/members/84111105ea0e8722/attributes","cluster-id":"73137fd659599d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T07:36:48.716610Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:36:48.716830Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T07:36:48.716844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T07:36:48.716966Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.241:2379"}
	{"level":"info","ts":"2024-09-15T07:36:48.717470Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:36:48.718267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7] <==
	{"level":"info","ts":"2024-09-15T07:29:56.622272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T07:29:56.623071Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:29:56.627592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.241:2379"}
	{"level":"info","ts":"2024-09-15T07:29:56.628375Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-15T07:30:49.133941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.379106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-127008-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:30:49.134303Z","caller":"traceutil/trace.go:171","msg":"trace[459317188] range","detail":"{range_begin:/registry/minions/multinode-127008-m02; range_end:; response_count:0; response_revision:437; }","duration":"132.796039ms","start":"2024-09-15T07:30:49.001490Z","end":"2024-09-15T07:30:49.134286Z","steps":["trace[459317188] 'range keys from in-memory index tree'  (duration: 132.227089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T07:30:49.134142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.692155ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:30:49.134991Z","caller":"traceutil/trace.go:171","msg":"trace[1520228515] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:437; }","duration":"202.555114ms","start":"2024-09-15T07:30:48.932425Z","end":"2024-09-15T07:30:49.134980Z","steps":["trace[1520228515] 'range keys from in-memory index tree'  (duration: 201.686275ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:31:47.189638Z","caller":"traceutil/trace.go:171","msg":"trace[1231691396] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"213.965993ms","start":"2024-09-15T07:31:46.975646Z","end":"2024-09-15T07:31:47.189612Z","steps":["trace[1231691396] 'read index received'  (duration: 28.446598ms)","trace[1231691396] 'applied index is now lower than readState.Index'  (duration: 185.518853ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:31:47.189999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.330151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-127008-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:31:47.190336Z","caller":"traceutil/trace.go:171","msg":"trace[369651290] range","detail":"{range_begin:/registry/minions/multinode-127008-m03; range_end:; response_count:0; response_revision:576; }","duration":"214.700516ms","start":"2024-09-15T07:31:46.975626Z","end":"2024-09-15T07:31:47.190327Z","steps":["trace[369651290] 'agreement among raft nodes before linearized reading'  (duration: 214.279624ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:31:47.190053Z","caller":"traceutil/trace.go:171","msg":"trace[467158830] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"236.331479ms","start":"2024-09-15T07:31:46.953707Z","end":"2024-09-15T07:31:47.190039Z","steps":["trace[467158830] 'process raft request'  (duration: 175.001843ms)","trace[467158830] 'compare'  (duration: 60.777874ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:31:47.190291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.676262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:31:47.191911Z","caller":"traceutil/trace.go:171","msg":"trace[1858409489] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:576; }","duration":"179.291027ms","start":"2024-09-15T07:31:47.012601Z","end":"2024-09-15T07:31:47.191892Z","steps":["trace[1858409489] 'agreement among raft nodes before linearized reading'  (duration: 177.656652ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:31:53.246331Z","caller":"traceutil/trace.go:171","msg":"trace[1745847829] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"213.154006ms","start":"2024-09-15T07:31:53.033163Z","end":"2024-09-15T07:31:53.246317Z","steps":["trace[1745847829] 'process raft request'  (duration: 212.992867ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:35:01.833260Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-15T07:35:01.833406Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-127008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"]}
	{"level":"warn","ts":"2024-09-15T07:35:01.833581Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:35:01.833738Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:35:01.881472Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.241:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:35:01.881537Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.241:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T07:35:01.884355Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"84111105ea0e8722","current-leader-member-id":"84111105ea0e8722"}
	{"level":"info","ts":"2024-09-15T07:35:01.888283Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:35:01.888521Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:35:01.888560Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-127008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"]}
	
	
	==> kernel <==
	 07:38:31 up 9 min,  0 users,  load average: 0.14, 0.20, 0.10
	Linux multinode-127008 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01] <==
	I0915 07:34:17.466714       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:27.473931       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:27.474275       1 main.go:299] handling current node
	I0915 07:34:27.474350       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:27.474377       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:27.474582       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:27.474604       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:37.466510       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:37.466568       1 main.go:299] handling current node
	I0915 07:34:37.466585       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:37.466591       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:37.466737       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:37.466760       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:47.474018       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:47.474121       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:47.474302       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:47.474333       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:47.474407       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:47.474428       1 main.go:299] handling current node
	I0915 07:34:57.465698       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:57.465745       1 main.go:299] handling current node
	I0915 07:34:57.465759       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:57.465764       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:57.465899       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:57.465924       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587] <==
	I0915 07:37:42.669534       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:37:52.669489       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:37:52.669600       1 main.go:299] handling current node
	I0915 07:37:52.669628       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:37:52.669646       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:37:52.669794       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:37:52.669822       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:38:02.670928       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:38:02.671105       1 main.go:299] handling current node
	I0915 07:38:02.671143       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:38:02.671162       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:38:02.671433       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:38:02.671469       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:38:12.669946       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:38:12.670023       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:38:12.670262       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:38:12.670291       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.2.0/24] 
	I0915 07:38:12.670386       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:38:12.670410       1 main.go:299] handling current node
	I0915 07:38:22.669263       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:38:22.669532       1 main.go:299] handling current node
	I0915 07:38:22.669611       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:38:22.669647       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:38:22.669884       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:38:22.669929       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.2.0/24] 
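Note: both kindnet instances are simply reconciling per-node routes. The detail worth noticing is that the old instance saw multinode-127008-m03 with CIDR 10.244.3.0/24 while the new one (and the node description above) shows 10.244.2.0/24, consistent with m03 having been deleted and re-added during the test. The current assignments can be listed directly:

  kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR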
	
	
	==> kube-apiserver [0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914] <==
	I0915 07:36:50.164917       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:36:50.164943       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:36:50.164953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:36:50.167502       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:36:50.170311       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 07:36:50.170537       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:36:50.171237       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:36:50.177255       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:36:50.177366       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:36:50.177395       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:36:50.177422       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:36:50.177444       1 cache.go:39] Caches are synced for autoregister controller
	E0915 07:36:50.179523       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0915 07:36:50.194249       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 07:36:50.206993       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:36:50.207071       1 policy_source.go:224] refreshing policies
	I0915 07:36:50.212586       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:36:50.977068       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 07:36:52.309490       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 07:36:52.454733       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 07:36:52.478874       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 07:36:52.585063       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 07:36:52.593887       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 07:36:53.578734       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 07:36:53.723515       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
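Note: this is the restarted kube-apiserver coming back healthy: caches sync, the quota admission evaluators are re-registered, and the single error about removing old kubernetes-service endpoints is typically a transient message on a fresh start. Aggregate health can be checked against the standard readiness endpoint:

  kubectl get --raw='/readyz?verbose'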
	
	
	==> kube-apiserver [39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0] <==
	W0915 07:35:01.861523       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861564       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861602       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861638       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861877       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861953       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861983       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862018       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862063       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862102       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862138       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.864991       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865028       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865058       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865101       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865146       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865682       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865718       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865750       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865781       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865817       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865923       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.866061       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.866097       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.866232       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e] <==
	I0915 07:32:35.573393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:35.573469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:32:36.617454       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-127008-m03\" does not exist"
	I0915 07:32:36.617877       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:32:36.636792       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-127008-m03" podCIDRs=["10.244.3.0/24"]
	I0915 07:32:36.636898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:36.636930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:36.640673       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:37.114534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:37.438467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:39.806798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:46.853770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:56.368855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:56.369334       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:32:56.380175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:59.763389       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:33:34.779797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:33:34.780864       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m03"
	I0915 07:33:34.802059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:33:34.807273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.496352ms"
	I0915 07:33:34.807738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.976µs"
	I0915 07:33:39.834412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:33:39.853976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:33:39.882412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:33:49.957372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	
	
	==> kube-controller-manager [e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de] <==
	I0915 07:37:49.673667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:37:49.683059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="129.055µs"
	I0915 07:37:49.702226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="55.709µs"
	I0915 07:37:53.674864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:37:54.399496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.492436ms"
	I0915 07:37:54.401051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.03µs"
	I0915 07:38:00.485893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:38:08.459504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:08.483266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:08.723810       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:08.724304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:38:09.989648       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:38:09.990016       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-127008-m03\" does not exist"
	I0915 07:38:10.004451       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-127008-m03" podCIDRs=["10.244.2.0/24"]
	I0915 07:38:10.004501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.004805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.015556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.267633       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.599889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:13.777308       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:20.201381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:28.547100       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:28.547485       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:38:28.560483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:28.694427       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	
	
	==> kube-proxy [63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:30:06.730652       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:30:06.745648       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	E0915 07:30:06.745763       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:30:06.792645       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:30:06.792730       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:30:06.792767       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:30:06.795458       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:30:06.795759       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:30:06.795801       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:30:06.797662       1 config.go:199] "Starting service config controller"
	I0915 07:30:06.797719       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:30:06.797754       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:30:06.797769       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:30:06.798410       1 config.go:328] "Starting node config controller"
	I0915 07:30:06.798457       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:30:06.898351       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:30:06.898444       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:30:06.898548       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:36:51.912337       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:36:51.935128       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	E0915 07:36:51.935274       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:36:52.004315       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:36:52.004372       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:36:52.004397       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:36:52.010730       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:36:52.011012       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:36:52.011027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:36:52.016772       1 config.go:199] "Starting service config controller"
	I0915 07:36:52.016823       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:36:52.016861       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:36:52.016865       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:36:52.017491       1 config.go:328] "Starting node config controller"
	I0915 07:36:52.017520       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:36:52.117523       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:36:52.117584       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:36:52.117834       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98] <==
	I0915 07:36:47.660148       1 serving.go:386] Generated self-signed cert in-memory
	W0915 07:36:50.006358       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 07:36:50.006449       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 07:36:50.006475       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 07:36:50.006487       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 07:36:50.121569       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 07:36:50.121668       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:36:50.135611       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 07:36:50.136133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 07:36:50.138253       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 07:36:50.138344       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 07:36:50.239267       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe] <==
	E0915 07:29:58.089983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.090045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 07:29:58.090083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.906256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:58.906289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.912246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:58.912329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.950085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 07:29:58.950217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.056844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 07:29:59.057167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.076573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 07:29:59.077335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.084278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 07:29:59.084327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.106687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 07:29:59.106745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.134484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:59.134534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.177165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:59.177253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.251824       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 07:29:59.251875       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 07:30:01.983239       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:35:01.842952       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 15 07:36:55 multinode-127008 kubelet[3052]: E0915 07:36:55.944998    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385815944626148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:36:55 multinode-127008 kubelet[3052]: E0915 07:36:55.945056    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385815944626148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:05 multinode-127008 kubelet[3052]: E0915 07:37:05.947343    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385825946317990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:05 multinode-127008 kubelet[3052]: E0915 07:37:05.947649    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385825946317990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:15 multinode-127008 kubelet[3052]: E0915 07:37:15.950035    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385835949733141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:15 multinode-127008 kubelet[3052]: E0915 07:37:15.950079    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385835949733141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:25 multinode-127008 kubelet[3052]: E0915 07:37:25.951802    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385845951401699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:25 multinode-127008 kubelet[3052]: E0915 07:37:25.951829    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385845951401699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:35 multinode-127008 kubelet[3052]: E0915 07:37:35.954250    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385855953678164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:35 multinode-127008 kubelet[3052]: E0915 07:37:35.956164    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385855953678164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:45 multinode-127008 kubelet[3052]: E0915 07:37:45.945025    3052 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:37:45 multinode-127008 kubelet[3052]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:37:45 multinode-127008 kubelet[3052]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:37:45 multinode-127008 kubelet[3052]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:37:45 multinode-127008 kubelet[3052]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:37:45 multinode-127008 kubelet[3052]: E0915 07:37:45.958662    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385865958380977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:45 multinode-127008 kubelet[3052]: E0915 07:37:45.958732    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385865958380977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:55 multinode-127008 kubelet[3052]: E0915 07:37:55.961137    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385875960539245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:37:55 multinode-127008 kubelet[3052]: E0915 07:37:55.961302    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385875960539245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:38:05 multinode-127008 kubelet[3052]: E0915 07:38:05.963464    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385885962517340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:38:05 multinode-127008 kubelet[3052]: E0915 07:38:05.964124    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385885962517340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:38:15 multinode-127008 kubelet[3052]: E0915 07:38:15.967443    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385895966882475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:38:15 multinode-127008 kubelet[3052]: E0915 07:38:15.967481    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385895966882475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:38:25 multinode-127008 kubelet[3052]: E0915 07:38:25.970265    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385905969161401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:38:25 multinode-127008 kubelet[3052]: E0915 07:38:25.970321    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385905969161401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:38:31.013695   46284 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19644-6166/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-127008 -n multinode-127008
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-127008 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (333.83s)
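Note on the stderr above: "bufio.Scanner: token too long" is the stock error from Go's bufio package. A Scanner rejects any single line longer than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB), so one oversized line in lastStart.txt is enough to abort the read. A minimal sketch of that behaviour and the usual workaround (the file name is illustrative; this is not minikube's own code):

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; in the run above minikube was reading its own lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call, any line longer than bufio.MaxScanTokenSize (64 KiB)
	// stops the scan with "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}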

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 stop
E0915 07:39:05.748950   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-127008 stop: exit status 82 (2m0.461169816s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-127008-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-127008 stop": exit status 82
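Exit status 82 together with GUEST_STOP_TIMEOUT above means the stop path polled the VM for the whole ~2m window while multinode-127008-m02 kept reporting "Running". The loop below is only a generic illustration of that poll-until-stopped-or-deadline pattern; getState is a hypothetical stand-in, not minikube's driver API:

package main

import (
	"context"
	"fmt"
	"time"
)

// getState is a hypothetical stand-in for a driver call reporting the VM state.
func getState(name string) string { return "Running" }

// waitForStop polls until the VM reports "Stopped" or the context deadline hits.
func waitForStop(ctx context.Context, name string) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		if getState(name) == "Stopped" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("unable to stop vm, current state %q: %w", getState(name), ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForStop(ctx, "multinode-127008-m02"); err != nil {
		// The run above surfaced this condition as GUEST_STOP_TIMEOUT (exit status 82).
		fmt.Println("stop failed:", err)
	}
}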
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-127008 status: exit status 3 (18.642226534s)

                                                
                                                
-- stdout --
	multinode-127008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-127008-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:40:53.970151   46947 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0915 07:40:53.970197   46947 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-127008 status" : exit status 3
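The status errors above ("dial tcp 192.168.39.253:22: connect: no route to host") show the follow-up status call failing at the SSH layer: the m02 address from the profile is no longer reachable after the failed stop. A trivial reachability probe under the same assumption (address copied from the log; this is not minikube's SSH client):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the status error above.
	conn, err := net.DialTimeout("tcp", "192.168.39.253:22", 5*time.Second)
	if err != nil {
		fmt.Println("node unreachable:", err) // e.g. connect: no route to host
		return
	}
	defer conn.Close()
	fmt.Println("port 22 reachable")
}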
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-127008 -n multinode-127008
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-127008 logs -n 25: (1.446377928s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008:/home/docker/cp-test_multinode-127008-m02_multinode-127008.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008 sudo cat                                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m02_multinode-127008.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03:/home/docker/cp-test_multinode-127008-m02_multinode-127008-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008-m03 sudo cat                                   | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m02_multinode-127008-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp testdata/cp-test.txt                                                | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4167936864/001/cp-test_multinode-127008-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008:/home/docker/cp-test_multinode-127008-m03_multinode-127008.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008 sudo cat                                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m03_multinode-127008.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt                       | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m02:/home/docker/cp-test_multinode-127008-m03_multinode-127008-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n                                                                 | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | multinode-127008-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-127008 ssh -n multinode-127008-m02 sudo cat                                   | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | /home/docker/cp-test_multinode-127008-m03_multinode-127008-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-127008 node stop m03                                                          | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	| node    | multinode-127008 node start                                                             | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC | 15 Sep 24 07:32 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-127008                                                                | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC |                     |
	| stop    | -p multinode-127008                                                                     | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:32 UTC |                     |
	| start   | -p multinode-127008                                                                     | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:35 UTC | 15 Sep 24 07:38 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-127008                                                                | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:38 UTC |                     |
	| node    | multinode-127008 node delete                                                            | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:38 UTC | 15 Sep 24 07:38 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-127008 stop                                                                   | multinode-127008 | jenkins | v1.34.0 | 15 Sep 24 07:38 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:35:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:35:00.954733   45126 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:35:00.955001   45126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:00.955011   45126 out.go:358] Setting ErrFile to fd 2...
	I0915 07:35:00.955015   45126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:00.955207   45126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:35:00.955745   45126 out.go:352] Setting JSON to false
	I0915 07:35:00.956677   45126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4647,"bootTime":1726381054,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:35:00.956777   45126 start.go:139] virtualization: kvm guest
	I0915 07:35:00.958970   45126 out.go:177] * [multinode-127008] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:35:00.960179   45126 notify.go:220] Checking for updates...
	I0915 07:35:00.960268   45126 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:35:00.961462   45126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:35:00.962842   45126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:35:00.964019   45126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:35:00.965294   45126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:35:00.966510   45126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:35:00.968289   45126 config.go:182] Loaded profile config "multinode-127008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:35:00.968412   45126 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:35:00.968835   45126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:35:00.968889   45126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:35:00.984994   45126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0915 07:35:00.985467   45126 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:35:00.986024   45126 main.go:141] libmachine: Using API Version  1
	I0915 07:35:00.986062   45126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:35:00.986441   45126 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:35:00.986593   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:35:01.022242   45126 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:35:01.023749   45126 start.go:297] selected driver: kvm2
	I0915 07:35:01.023769   45126 start.go:901] validating driver "kvm2" against &{Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:35:01.023960   45126 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:35:01.024393   45126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:35:01.024488   45126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:35:01.039311   45126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:35:01.040088   45126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:35:01.040124   45126 cni.go:84] Creating CNI manager for ""
	I0915 07:35:01.040179   45126 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0915 07:35:01.040239   45126 start.go:340] cluster config:
	{Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:35:01.040356   45126 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:35:01.042265   45126 out.go:177] * Starting "multinode-127008" primary control-plane node in "multinode-127008" cluster
	I0915 07:35:01.043603   45126 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:35:01.043645   45126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:35:01.043653   45126 cache.go:56] Caching tarball of preloaded images
	I0915 07:35:01.043726   45126 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:35:01.043737   45126 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:35:01.043861   45126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/config.json ...
	I0915 07:35:01.044058   45126 start.go:360] acquireMachinesLock for multinode-127008: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:35:01.044100   45126 start.go:364] duration metric: took 23.436µs to acquireMachinesLock for "multinode-127008"
	I0915 07:35:01.044111   45126 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:35:01.044116   45126 fix.go:54] fixHost starting: 
	I0915 07:35:01.044382   45126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:35:01.044412   45126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:35:01.058836   45126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
	I0915 07:35:01.059324   45126 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:35:01.059773   45126 main.go:141] libmachine: Using API Version  1
	I0915 07:35:01.059793   45126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:35:01.060071   45126 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:35:01.060259   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:35:01.060392   45126 main.go:141] libmachine: (multinode-127008) Calling .GetState
	I0915 07:35:01.062106   45126 fix.go:112] recreateIfNeeded on multinode-127008: state=Running err=<nil>
	W0915 07:35:01.062134   45126 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:35:01.064320   45126 out.go:177] * Updating the running kvm2 "multinode-127008" VM ...
	I0915 07:35:01.065721   45126 machine.go:93] provisionDockerMachine start ...
	I0915 07:35:01.065756   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:35:01.065972   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.068526   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.068970   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.068999   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.069087   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.069255   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.069411   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.069530   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.069670   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.069868   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.069879   45126 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:35:01.183481   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127008
	
	I0915 07:35:01.183505   45126 main.go:141] libmachine: (multinode-127008) Calling .GetMachineName
	I0915 07:35:01.183719   45126 buildroot.go:166] provisioning hostname "multinode-127008"
	I0915 07:35:01.183740   45126 main.go:141] libmachine: (multinode-127008) Calling .GetMachineName
	I0915 07:35:01.183921   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.186468   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.186837   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.186865   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.187021   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.187209   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.187340   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.187552   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.187742   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.187910   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.187926   45126 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-127008 && echo "multinode-127008" | sudo tee /etc/hostname
	I0915 07:35:01.306241   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127008
	
	I0915 07:35:01.306278   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.308993   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.309361   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.309403   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.309573   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.309746   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.309887   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.310007   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.310148   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.310338   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.310354   45126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-127008' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-127008/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-127008' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:35:01.415164   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:35:01.415197   45126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:35:01.415215   45126 buildroot.go:174] setting up certificates
	I0915 07:35:01.415224   45126 provision.go:84] configureAuth start
	I0915 07:35:01.415252   45126 main.go:141] libmachine: (multinode-127008) Calling .GetMachineName
	I0915 07:35:01.415554   45126 main.go:141] libmachine: (multinode-127008) Calling .GetIP
	I0915 07:35:01.418195   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.418593   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.418629   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.418766   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.420920   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.421220   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.421246   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.421373   45126 provision.go:143] copyHostCerts
	I0915 07:35:01.421407   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:35:01.421444   45126 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:35:01.421453   45126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:35:01.421533   45126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:35:01.421657   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:35:01.421677   45126 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:35:01.421682   45126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:35:01.421707   45126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:35:01.421762   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:35:01.421779   45126 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:35:01.421782   45126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:35:01.421803   45126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:35:01.421896   45126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.multinode-127008 san=[127.0.0.1 192.168.39.241 localhost minikube multinode-127008]
	I0915 07:35:01.545313   45126 provision.go:177] copyRemoteCerts
	I0915 07:35:01.545372   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:35:01.545393   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.548002   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.548453   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.548491   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.548697   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.548886   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.549057   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.549234   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:35:01.632985   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:35:01.633058   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:35:01.659513   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:35:01.659598   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0915 07:35:01.686974   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:35:01.687061   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 07:35:01.713790   45126 provision.go:87] duration metric: took 298.552813ms to configureAuth
	I0915 07:35:01.713850   45126 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:35:01.714170   45126 config.go:182] Loaded profile config "multinode-127008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:35:01.714267   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:35:01.717177   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.717523   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:35:01.717554   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:35:01.717728   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:35:01.717935   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.718090   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:35:01.718240   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:35:01.718369   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:35:01.718534   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:35:01.718553   45126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:36:32.502356   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:36:32.502388   45126 machine.go:96] duration metric: took 1m31.436651117s to provisionDockerMachine
	I0915 07:36:32.502404   45126 start.go:293] postStartSetup for "multinode-127008" (driver="kvm2")
	I0915 07:36:32.502417   45126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:36:32.502439   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.502768   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:36:32.502797   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.505846   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.506255   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.506276   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.506471   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.506660   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.506899   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.507031   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:36:32.589664   45126 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:36:32.594064   45126 command_runner.go:130] > NAME=Buildroot
	I0915 07:36:32.594085   45126 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0915 07:36:32.594098   45126 command_runner.go:130] > ID=buildroot
	I0915 07:36:32.594104   45126 command_runner.go:130] > VERSION_ID=2023.02.9
	I0915 07:36:32.594110   45126 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0915 07:36:32.594376   45126 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:36:32.594408   45126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:36:32.594503   45126 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:36:32.594579   45126 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:36:32.594588   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /etc/ssl/certs/131902.pem
	I0915 07:36:32.594677   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:36:32.603919   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:36:32.631957   45126 start.go:296] duration metric: took 129.540503ms for postStartSetup
	I0915 07:36:32.632000   45126 fix.go:56] duration metric: took 1m31.587884451s for fixHost
	I0915 07:36:32.632029   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.634754   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.635253   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.635275   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.635486   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.635674   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.635818   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.635928   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.636048   45126 main.go:141] libmachine: Using SSH client type: native
	I0915 07:36:32.636233   45126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0915 07:36:32.636246   45126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:36:32.747090   45126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726385792.723897422
	
	I0915 07:36:32.747111   45126 fix.go:216] guest clock: 1726385792.723897422
	I0915 07:36:32.747121   45126 fix.go:229] Guest: 2024-09-15 07:36:32.723897422 +0000 UTC Remote: 2024-09-15 07:36:32.632004342 +0000 UTC m=+91.712737373 (delta=91.89308ms)
	I0915 07:36:32.747144   45126 fix.go:200] guest clock delta is within tolerance: 91.89308ms
	I0915 07:36:32.747150   45126 start.go:83] releasing machines lock for "multinode-127008", held for 1m31.703043829s
	I0915 07:36:32.747176   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.747478   45126 main.go:141] libmachine: (multinode-127008) Calling .GetIP
	I0915 07:36:32.750733   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.751273   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.751297   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.751493   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.752031   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.752237   45126 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:36:32.752339   45126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:36:32.752381   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.752462   45126 ssh_runner.go:195] Run: cat /version.json
	I0915 07:36:32.752482   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:36:32.755177   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755395   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755609   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.755636   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755809   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.755881   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:32.755910   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:32.755967   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.756054   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:36:32.756134   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.756215   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:36:32.756298   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:36:32.756390   45126 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:36:32.756538   45126 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:36:32.913453   45126 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0915 07:36:32.913530   45126 command_runner.go:130] > {"iso_version": "v1.34.0-1726358414-19644", "kicbase_version": "v0.0.45-1726281268-19643", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0915 07:36:32.913681   45126 ssh_runner.go:195] Run: systemctl --version
	I0915 07:36:32.919526   45126 command_runner.go:130] > systemd 252 (252)
	I0915 07:36:32.919554   45126 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0915 07:36:32.919758   45126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:36:33.102689   45126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 07:36:33.108757   45126 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0915 07:36:33.108830   45126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:36:33.108884   45126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:36:33.118506   45126 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:36:33.118530   45126 start.go:495] detecting cgroup driver to use...
	I0915 07:36:33.118594   45126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:36:33.135836   45126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:36:33.150605   45126 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:36:33.150663   45126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:36:33.164563   45126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:36:33.178132   45126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:36:33.320377   45126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:36:33.465271   45126 docker.go:233] disabling docker service ...
	I0915 07:36:33.465350   45126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:36:33.483545   45126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:36:33.496949   45126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:36:33.635434   45126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:36:33.781559   45126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:36:33.796250   45126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:36:33.815779   45126 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0915 07:36:33.815824   45126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:36:33.815880   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.827370   45126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:36:33.827455   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.838730   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.849411   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.859991   45126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:36:33.871461   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.882502   45126 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.893754   45126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:36:33.905143   45126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:36:33.914724   45126 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0915 07:36:33.914826   45126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:36:33.924236   45126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:36:34.058598   45126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:36:43.325857   45126 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.267217442s)
	I0915 07:36:43.325894   45126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:36:43.325953   45126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:36:43.331511   45126 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0915 07:36:43.331540   45126 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0915 07:36:43.331551   45126 command_runner.go:130] > Device: 0,22	Inode: 1386        Links: 1
	I0915 07:36:43.331561   45126 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0915 07:36:43.331568   45126 command_runner.go:130] > Access: 2024-09-15 07:36:43.157037321 +0000
	I0915 07:36:43.331577   45126 command_runner.go:130] > Modify: 2024-09-15 07:36:43.157037321 +0000
	I0915 07:36:43.331586   45126 command_runner.go:130] > Change: 2024-09-15 07:36:43.157037321 +0000
	I0915 07:36:43.331613   45126 command_runner.go:130] >  Birth: -
	I0915 07:36:43.331635   45126 start.go:563] Will wait 60s for crictl version
	I0915 07:36:43.331676   45126 ssh_runner.go:195] Run: which crictl
	I0915 07:36:43.335659   45126 command_runner.go:130] > /usr/bin/crictl
	I0915 07:36:43.335736   45126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:36:43.373587   45126 command_runner.go:130] > Version:  0.1.0
	I0915 07:36:43.373617   45126 command_runner.go:130] > RuntimeName:  cri-o
	I0915 07:36:43.373624   45126 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0915 07:36:43.373632   45126 command_runner.go:130] > RuntimeApiVersion:  v1
	I0915 07:36:43.374910   45126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:36:43.374993   45126 ssh_runner.go:195] Run: crio --version
	I0915 07:36:43.403125   45126 command_runner.go:130] > crio version 1.29.1
	I0915 07:36:43.403147   45126 command_runner.go:130] > Version:        1.29.1
	I0915 07:36:43.403152   45126 command_runner.go:130] > GitCommit:      unknown
	I0915 07:36:43.403157   45126 command_runner.go:130] > GitCommitDate:  unknown
	I0915 07:36:43.403161   45126 command_runner.go:130] > GitTreeState:   clean
	I0915 07:36:43.403166   45126 command_runner.go:130] > BuildDate:      2024-09-15T05:30:16Z
	I0915 07:36:43.403171   45126 command_runner.go:130] > GoVersion:      go1.21.6
	I0915 07:36:43.403174   45126 command_runner.go:130] > Compiler:       gc
	I0915 07:36:43.403179   45126 command_runner.go:130] > Platform:       linux/amd64
	I0915 07:36:43.403183   45126 command_runner.go:130] > Linkmode:       dynamic
	I0915 07:36:43.403187   45126 command_runner.go:130] > BuildTags:      
	I0915 07:36:43.403191   45126 command_runner.go:130] >   containers_image_ostree_stub
	I0915 07:36:43.403195   45126 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0915 07:36:43.403228   45126 command_runner.go:130] >   btrfs_noversion
	I0915 07:36:43.403240   45126 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0915 07:36:43.403244   45126 command_runner.go:130] >   libdm_no_deferred_remove
	I0915 07:36:43.403247   45126 command_runner.go:130] >   seccomp
	I0915 07:36:43.403252   45126 command_runner.go:130] > LDFlags:          unknown
	I0915 07:36:43.403258   45126 command_runner.go:130] > SeccompEnabled:   true
	I0915 07:36:43.403263   45126 command_runner.go:130] > AppArmorEnabled:  false
	I0915 07:36:43.404583   45126 ssh_runner.go:195] Run: crio --version
	I0915 07:36:43.437258   45126 command_runner.go:130] > crio version 1.29.1
	I0915 07:36:43.437285   45126 command_runner.go:130] > Version:        1.29.1
	I0915 07:36:43.437294   45126 command_runner.go:130] > GitCommit:      unknown
	I0915 07:36:43.437301   45126 command_runner.go:130] > GitCommitDate:  unknown
	I0915 07:36:43.437307   45126 command_runner.go:130] > GitTreeState:   clean
	I0915 07:36:43.437317   45126 command_runner.go:130] > BuildDate:      2024-09-15T05:30:16Z
	I0915 07:36:43.437323   45126 command_runner.go:130] > GoVersion:      go1.21.6
	I0915 07:36:43.437330   45126 command_runner.go:130] > Compiler:       gc
	I0915 07:36:43.437338   45126 command_runner.go:130] > Platform:       linux/amd64
	I0915 07:36:43.437345   45126 command_runner.go:130] > Linkmode:       dynamic
	I0915 07:36:43.437352   45126 command_runner.go:130] > BuildTags:      
	I0915 07:36:43.437359   45126 command_runner.go:130] >   containers_image_ostree_stub
	I0915 07:36:43.437367   45126 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0915 07:36:43.437376   45126 command_runner.go:130] >   btrfs_noversion
	I0915 07:36:43.437387   45126 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0915 07:36:43.437397   45126 command_runner.go:130] >   libdm_no_deferred_remove
	I0915 07:36:43.437403   45126 command_runner.go:130] >   seccomp
	I0915 07:36:43.437413   45126 command_runner.go:130] > LDFlags:          unknown
	I0915 07:36:43.437423   45126 command_runner.go:130] > SeccompEnabled:   true
	I0915 07:36:43.437432   45126 command_runner.go:130] > AppArmorEnabled:  false
	I0915 07:36:43.440531   45126 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:36:43.441726   45126 main.go:141] libmachine: (multinode-127008) Calling .GetIP
	I0915 07:36:43.444518   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:43.444860   45126 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:36:43.444887   45126 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:36:43.445121   45126 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 07:36:43.449545   45126 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0915 07:36:43.449643   45126 kubeadm.go:883] updating cluster {Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:36:43.449794   45126 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:36:43.449871   45126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:36:43.492940   45126 command_runner.go:130] > {
	I0915 07:36:43.492965   45126 command_runner.go:130] >   "images": [
	I0915 07:36:43.492971   45126 command_runner.go:130] >     {
	I0915 07:36:43.492983   45126 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0915 07:36:43.492988   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.492995   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0915 07:36:43.492998   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493002   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493009   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0915 07:36:43.493018   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0915 07:36:43.493023   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493029   45126 command_runner.go:130] >       "size": "87190579",
	I0915 07:36:43.493035   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493042   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493051   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493062   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493068   45126 command_runner.go:130] >     },
	I0915 07:36:43.493074   45126 command_runner.go:130] >     {
	I0915 07:36:43.493080   45126 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0915 07:36:43.493083   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493090   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0915 07:36:43.493094   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493103   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493112   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0915 07:36:43.493126   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0915 07:36:43.493136   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493142   45126 command_runner.go:130] >       "size": "1363676",
	I0915 07:36:43.493152   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493166   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493175   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493181   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493185   45126 command_runner.go:130] >     },
	I0915 07:36:43.493189   45126 command_runner.go:130] >     {
	I0915 07:36:43.493203   45126 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0915 07:36:43.493209   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493217   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0915 07:36:43.493225   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493232   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493248   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0915 07:36:43.493263   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0915 07:36:43.493272   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493279   45126 command_runner.go:130] >       "size": "31470524",
	I0915 07:36:43.493287   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493293   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493301   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493306   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493311   45126 command_runner.go:130] >     },
	I0915 07:36:43.493315   45126 command_runner.go:130] >     {
	I0915 07:36:43.493321   45126 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0915 07:36:43.493329   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493340   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0915 07:36:43.493349   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493359   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493371   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0915 07:36:43.493391   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0915 07:36:43.493401   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493409   45126 command_runner.go:130] >       "size": "63273227",
	I0915 07:36:43.493415   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.493422   45126 command_runner.go:130] >       "username": "nonroot",
	I0915 07:36:43.493431   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493440   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493449   45126 command_runner.go:130] >     },
	I0915 07:36:43.493458   45126 command_runner.go:130] >     {
	I0915 07:36:43.493470   45126 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0915 07:36:43.493479   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493487   45126 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0915 07:36:43.493494   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493498   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493508   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0915 07:36:43.493522   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0915 07:36:43.493531   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493537   45126 command_runner.go:130] >       "size": "149009664",
	I0915 07:36:43.493546   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.493552   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.493560   45126 command_runner.go:130] >       },
	I0915 07:36:43.493567   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493576   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493581   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493586   45126 command_runner.go:130] >     },
	I0915 07:36:43.493591   45126 command_runner.go:130] >     {
	I0915 07:36:43.493603   45126 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0915 07:36:43.493613   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493622   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0915 07:36:43.493630   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493640   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493654   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0915 07:36:43.493668   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0915 07:36:43.493677   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493685   45126 command_runner.go:130] >       "size": "95237600",
	I0915 07:36:43.493692   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.493698   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.493706   45126 command_runner.go:130] >       },
	I0915 07:36:43.493713   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493723   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493732   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493740   45126 command_runner.go:130] >     },
	I0915 07:36:43.493748   45126 command_runner.go:130] >     {
	I0915 07:36:43.493761   45126 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0915 07:36:43.493769   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493776   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0915 07:36:43.493781   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493790   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493816   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0915 07:36:43.493832   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0915 07:36:43.493841   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493850   45126 command_runner.go:130] >       "size": "89437508",
	I0915 07:36:43.493858   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.493866   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.493873   45126 command_runner.go:130] >       },
	I0915 07:36:43.493880   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.493888   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.493896   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.493906   45126 command_runner.go:130] >     },
	I0915 07:36:43.493912   45126 command_runner.go:130] >     {
	I0915 07:36:43.493925   45126 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0915 07:36:43.493934   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.493945   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0915 07:36:43.493953   45126 command_runner.go:130] >       ],
	I0915 07:36:43.493962   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.493979   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0915 07:36:43.493992   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0915 07:36:43.494001   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494011   45126 command_runner.go:130] >       "size": "92733849",
	I0915 07:36:43.494021   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.494028   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.494035   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.494041   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.494046   45126 command_runner.go:130] >     },
	I0915 07:36:43.494051   45126 command_runner.go:130] >     {
	I0915 07:36:43.494061   45126 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0915 07:36:43.494066   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.494071   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0915 07:36:43.494077   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494084   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.494098   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0915 07:36:43.494110   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0915 07:36:43.494116   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494122   45126 command_runner.go:130] >       "size": "68420934",
	I0915 07:36:43.494128   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.494134   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.494140   45126 command_runner.go:130] >       },
	I0915 07:36:43.494168   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.494175   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.494181   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.494186   45126 command_runner.go:130] >     },
	I0915 07:36:43.494192   45126 command_runner.go:130] >     {
	I0915 07:36:43.494207   45126 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0915 07:36:43.494216   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.494224   45126 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0915 07:36:43.494232   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494239   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.494252   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0915 07:36:43.494262   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0915 07:36:43.494268   45126 command_runner.go:130] >       ],
	I0915 07:36:43.494280   45126 command_runner.go:130] >       "size": "742080",
	I0915 07:36:43.494289   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.494297   45126 command_runner.go:130] >         "value": "65535"
	I0915 07:36:43.494306   45126 command_runner.go:130] >       },
	I0915 07:36:43.494315   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.494325   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.494333   45126 command_runner.go:130] >       "pinned": true
	I0915 07:36:43.494341   45126 command_runner.go:130] >     }
	I0915 07:36:43.494349   45126 command_runner.go:130] >   ]
	I0915 07:36:43.494354   45126 command_runner.go:130] > }
	I0915 07:36:43.494833   45126 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:36:43.494858   45126 crio.go:433] Images already preloaded, skipping extraction
	I0915 07:36:43.494916   45126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:36:43.531014   45126 command_runner.go:130] > {
	I0915 07:36:43.531035   45126 command_runner.go:130] >   "images": [
	I0915 07:36:43.531039   45126 command_runner.go:130] >     {
	I0915 07:36:43.531047   45126 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0915 07:36:43.531052   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531058   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0915 07:36:43.531061   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531065   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531074   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0915 07:36:43.531085   45126 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0915 07:36:43.531090   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531098   45126 command_runner.go:130] >       "size": "87190579",
	I0915 07:36:43.531104   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531110   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531118   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531124   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531134   45126 command_runner.go:130] >     },
	I0915 07:36:43.531139   45126 command_runner.go:130] >     {
	I0915 07:36:43.531146   45126 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0915 07:36:43.531153   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531159   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0915 07:36:43.531162   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531167   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531177   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0915 07:36:43.531192   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0915 07:36:43.531202   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531209   45126 command_runner.go:130] >       "size": "1363676",
	I0915 07:36:43.531218   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531230   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531237   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531243   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531250   45126 command_runner.go:130] >     },
	I0915 07:36:43.531256   45126 command_runner.go:130] >     {
	I0915 07:36:43.531272   45126 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0915 07:36:43.531282   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531291   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0915 07:36:43.531299   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531306   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531319   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0915 07:36:43.531330   45126 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0915 07:36:43.531336   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531342   45126 command_runner.go:130] >       "size": "31470524",
	I0915 07:36:43.531352   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531358   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531365   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531371   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531379   45126 command_runner.go:130] >     },
	I0915 07:36:43.531385   45126 command_runner.go:130] >     {
	I0915 07:36:43.531402   45126 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0915 07:36:43.531410   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531415   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0915 07:36:43.531421   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531431   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531446   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0915 07:36:43.531468   45126 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0915 07:36:43.531478   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531485   45126 command_runner.go:130] >       "size": "63273227",
	I0915 07:36:43.531491   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.531496   45126 command_runner.go:130] >       "username": "nonroot",
	I0915 07:36:43.531499   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531505   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531514   45126 command_runner.go:130] >     },
	I0915 07:36:43.531519   45126 command_runner.go:130] >     {
	I0915 07:36:43.531531   45126 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0915 07:36:43.531541   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531548   45126 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0915 07:36:43.531564   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531574   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531582   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0915 07:36:43.531591   45126 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0915 07:36:43.531600   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531608   45126 command_runner.go:130] >       "size": "149009664",
	I0915 07:36:43.531617   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.531626   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.531635   45126 command_runner.go:130] >       },
	I0915 07:36:43.531644   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531653   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531662   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531668   45126 command_runner.go:130] >     },
	I0915 07:36:43.531672   45126 command_runner.go:130] >     {
	I0915 07:36:43.531683   45126 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0915 07:36:43.531692   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531703   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0915 07:36:43.531712   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531721   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531736   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0915 07:36:43.531749   45126 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0915 07:36:43.531755   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531760   45126 command_runner.go:130] >       "size": "95237600",
	I0915 07:36:43.531769   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.531778   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.531787   45126 command_runner.go:130] >       },
	I0915 07:36:43.531796   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531805   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531814   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531822   45126 command_runner.go:130] >     },
	I0915 07:36:43.531831   45126 command_runner.go:130] >     {
	I0915 07:36:43.531840   45126 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0915 07:36:43.531847   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.531862   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0915 07:36:43.531872   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531882   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.531896   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0915 07:36:43.531911   45126 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0915 07:36:43.531920   45126 command_runner.go:130] >       ],
	I0915 07:36:43.531926   45126 command_runner.go:130] >       "size": "89437508",
	I0915 07:36:43.531930   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.531939   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.531947   45126 command_runner.go:130] >       },
	I0915 07:36:43.531957   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.531966   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.531973   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.531981   45126 command_runner.go:130] >     },
	I0915 07:36:43.531989   45126 command_runner.go:130] >     {
	I0915 07:36:43.532001   45126 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0915 07:36:43.532008   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.532014   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0915 07:36:43.532021   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532030   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.532065   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0915 07:36:43.532080   45126 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0915 07:36:43.532085   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532091   45126 command_runner.go:130] >       "size": "92733849",
	I0915 07:36:43.532098   45126 command_runner.go:130] >       "uid": null,
	I0915 07:36:43.532102   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.532111   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.532117   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.532123   45126 command_runner.go:130] >     },
	I0915 07:36:43.532128   45126 command_runner.go:130] >     {
	I0915 07:36:43.532137   45126 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0915 07:36:43.532146   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.532156   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0915 07:36:43.532171   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532179   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.532186   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0915 07:36:43.532199   45126 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0915 07:36:43.532208   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532215   45126 command_runner.go:130] >       "size": "68420934",
	I0915 07:36:43.532224   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.532231   45126 command_runner.go:130] >         "value": "0"
	I0915 07:36:43.532239   45126 command_runner.go:130] >       },
	I0915 07:36:43.532246   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.532255   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.532261   45126 command_runner.go:130] >       "pinned": false
	I0915 07:36:43.532268   45126 command_runner.go:130] >     },
	I0915 07:36:43.532271   45126 command_runner.go:130] >     {
	I0915 07:36:43.532279   45126 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0915 07:36:43.532288   45126 command_runner.go:130] >       "repoTags": [
	I0915 07:36:43.532299   45126 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0915 07:36:43.532307   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532316   45126 command_runner.go:130] >       "repoDigests": [
	I0915 07:36:43.532327   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0915 07:36:43.532340   45126 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0915 07:36:43.532348   45126 command_runner.go:130] >       ],
	I0915 07:36:43.532355   45126 command_runner.go:130] >       "size": "742080",
	I0915 07:36:43.532360   45126 command_runner.go:130] >       "uid": {
	I0915 07:36:43.532366   45126 command_runner.go:130] >         "value": "65535"
	I0915 07:36:43.532375   45126 command_runner.go:130] >       },
	I0915 07:36:43.532381   45126 command_runner.go:130] >       "username": "",
	I0915 07:36:43.532390   45126 command_runner.go:130] >       "spec": null,
	I0915 07:36:43.532405   45126 command_runner.go:130] >       "pinned": true
	I0915 07:36:43.532413   45126 command_runner.go:130] >     }
	I0915 07:36:43.532418   45126 command_runner.go:130] >   ]
	I0915 07:36:43.532425   45126 command_runner.go:130] > }
	I0915 07:36:43.532578   45126 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:36:43.532600   45126 cache_images.go:84] Images are preloaded, skipping loading
	I0915 07:36:43.532608   45126 kubeadm.go:934] updating node { 192.168.39.241 8443 v1.31.1 crio true true} ...
	I0915 07:36:43.532727   45126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-127008 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
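The two JSON blocks above are the raw output of `sudo crictl images --output json`; the preload check in crio.go only needs each image's repoTags to conclude that all expected images are present and that extraction of the preload tarball can be skipped. A minimal standalone sketch of that check (not minikube's actual code; the struct and field names below are invented for illustration and only mirror the JSON fields visible in the log) could look like:

	// List repo tags the same way the log's "crictl images --output json" call does.
	// Assumes crictl is installed on the node; struct names are illustrative only.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Same command the log shows minikube running over SSH.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}

Run on the node, this would print the same registry.k8s.io, gcr.io and docker.io tags enumerated in the JSON above.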
	I0915 07:36:43.532809   45126 ssh_runner.go:195] Run: crio config
	I0915 07:36:43.572249   45126 command_runner.go:130] ! time="2024-09-15 07:36:43.549944692Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0915 07:36:43.577673   45126 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0915 07:36:43.583465   45126 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0915 07:36:43.583488   45126 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0915 07:36:43.583495   45126 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0915 07:36:43.583498   45126 command_runner.go:130] > #
	I0915 07:36:43.583505   45126 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0915 07:36:43.583511   45126 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0915 07:36:43.583518   45126 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0915 07:36:43.583527   45126 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0915 07:36:43.583533   45126 command_runner.go:130] > # reload'.
	I0915 07:36:43.583543   45126 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0915 07:36:43.583555   45126 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0915 07:36:43.583568   45126 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0915 07:36:43.583575   45126 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0915 07:36:43.583578   45126 command_runner.go:130] > [crio]
	I0915 07:36:43.583595   45126 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0915 07:36:43.583602   45126 command_runner.go:130] > # containers images, in this directory.
	I0915 07:36:43.583607   45126 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0915 07:36:43.583623   45126 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0915 07:36:43.583634   45126 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0915 07:36:43.583647   45126 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0915 07:36:43.583661   45126 command_runner.go:130] > # imagestore = ""
	I0915 07:36:43.583683   45126 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0915 07:36:43.583694   45126 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0915 07:36:43.583699   45126 command_runner.go:130] > storage_driver = "overlay"
	I0915 07:36:43.583704   45126 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0915 07:36:43.583715   45126 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0915 07:36:43.583724   45126 command_runner.go:130] > storage_option = [
	I0915 07:36:43.583735   45126 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0915 07:36:43.583741   45126 command_runner.go:130] > ]
	I0915 07:36:43.583755   45126 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0915 07:36:43.583767   45126 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0915 07:36:43.583777   45126 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0915 07:36:43.583789   45126 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0915 07:36:43.583801   45126 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0915 07:36:43.583807   45126 command_runner.go:130] > # always happen on a node reboot
	I0915 07:36:43.583814   45126 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0915 07:36:43.583835   45126 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0915 07:36:43.583848   45126 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0915 07:36:43.583859   45126 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0915 07:36:43.583870   45126 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0915 07:36:43.583883   45126 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0915 07:36:43.583898   45126 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0915 07:36:43.583907   45126 command_runner.go:130] > # internal_wipe = true
	I0915 07:36:43.583918   45126 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0915 07:36:43.583929   45126 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0915 07:36:43.583938   45126 command_runner.go:130] > # internal_repair = false
	I0915 07:36:43.583947   45126 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0915 07:36:43.583961   45126 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0915 07:36:43.583972   45126 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0915 07:36:43.583984   45126 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0915 07:36:43.583995   45126 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0915 07:36:43.584004   45126 command_runner.go:130] > [crio.api]
	I0915 07:36:43.584012   45126 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0915 07:36:43.584022   45126 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0915 07:36:43.584035   45126 command_runner.go:130] > # IP address on which the stream server will listen.
	I0915 07:36:43.584045   45126 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0915 07:36:43.584059   45126 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0915 07:36:43.584070   45126 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0915 07:36:43.584079   45126 command_runner.go:130] > # stream_port = "0"
	I0915 07:36:43.584090   45126 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0915 07:36:43.584098   45126 command_runner.go:130] > # stream_enable_tls = false
	I0915 07:36:43.584105   45126 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0915 07:36:43.584113   45126 command_runner.go:130] > # stream_idle_timeout = ""
	I0915 07:36:43.584126   45126 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0915 07:36:43.584139   45126 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0915 07:36:43.584145   45126 command_runner.go:130] > # minutes.
	I0915 07:36:43.584151   45126 command_runner.go:130] > # stream_tls_cert = ""
	I0915 07:36:43.584163   45126 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0915 07:36:43.584175   45126 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0915 07:36:43.584184   45126 command_runner.go:130] > # stream_tls_key = ""
	I0915 07:36:43.584190   45126 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0915 07:36:43.584207   45126 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0915 07:36:43.584231   45126 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0915 07:36:43.584241   45126 command_runner.go:130] > # stream_tls_ca = ""
	I0915 07:36:43.584253   45126 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0915 07:36:43.584261   45126 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0915 07:36:43.584273   45126 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0915 07:36:43.584283   45126 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0915 07:36:43.584291   45126 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0915 07:36:43.584301   45126 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0915 07:36:43.584310   45126 command_runner.go:130] > [crio.runtime]
	I0915 07:36:43.584320   45126 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0915 07:36:43.584332   45126 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0915 07:36:43.584342   45126 command_runner.go:130] > # "nofile=1024:2048"
	I0915 07:36:43.584354   45126 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0915 07:36:43.584364   45126 command_runner.go:130] > # default_ulimits = [
	I0915 07:36:43.584372   45126 command_runner.go:130] > # ]
	I0915 07:36:43.584379   45126 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0915 07:36:43.584387   45126 command_runner.go:130] > # no_pivot = false
	I0915 07:36:43.584397   45126 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0915 07:36:43.584410   45126 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0915 07:36:43.584421   45126 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0915 07:36:43.584433   45126 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0915 07:36:43.584444   45126 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0915 07:36:43.584458   45126 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0915 07:36:43.584468   45126 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0915 07:36:43.584476   45126 command_runner.go:130] > # Cgroup setting for conmon
	I0915 07:36:43.584487   45126 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0915 07:36:43.584497   45126 command_runner.go:130] > conmon_cgroup = "pod"
	I0915 07:36:43.584509   45126 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0915 07:36:43.584520   45126 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0915 07:36:43.584533   45126 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0915 07:36:43.584543   45126 command_runner.go:130] > conmon_env = [
	I0915 07:36:43.584555   45126 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0915 07:36:43.584562   45126 command_runner.go:130] > ]
	I0915 07:36:43.584567   45126 command_runner.go:130] > # Additional environment variables to set for all the
	I0915 07:36:43.584578   45126 command_runner.go:130] > # containers. These are overridden if set in the
	I0915 07:36:43.584590   45126 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0915 07:36:43.584600   45126 command_runner.go:130] > # default_env = [
	I0915 07:36:43.584608   45126 command_runner.go:130] > # ]
	I0915 07:36:43.584622   45126 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0915 07:36:43.584637   45126 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0915 07:36:43.584645   45126 command_runner.go:130] > # selinux = false
	I0915 07:36:43.584656   45126 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0915 07:36:43.584665   45126 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0915 07:36:43.584676   45126 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0915 07:36:43.584687   45126 command_runner.go:130] > # seccomp_profile = ""
	I0915 07:36:43.584696   45126 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0915 07:36:43.584708   45126 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0915 07:36:43.584721   45126 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0915 07:36:43.584731   45126 command_runner.go:130] > # which might increase security.
	I0915 07:36:43.584742   45126 command_runner.go:130] > # This option is currently deprecated,
	I0915 07:36:43.584753   45126 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0915 07:36:43.584760   45126 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0915 07:36:43.584769   45126 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0915 07:36:43.584782   45126 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0915 07:36:43.584795   45126 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0915 07:36:43.584808   45126 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0915 07:36:43.584818   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.584828   45126 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0915 07:36:43.584838   45126 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0915 07:36:43.584845   45126 command_runner.go:130] > # the cgroup blockio controller.
	I0915 07:36:43.584852   45126 command_runner.go:130] > # blockio_config_file = ""
	I0915 07:36:43.584865   45126 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0915 07:36:43.584874   45126 command_runner.go:130] > # blockio parameters.
	I0915 07:36:43.584883   45126 command_runner.go:130] > # blockio_reload = false
	I0915 07:36:43.584896   45126 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0915 07:36:43.584905   45126 command_runner.go:130] > # irqbalance daemon.
	I0915 07:36:43.584916   45126 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0915 07:36:43.584926   45126 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0915 07:36:43.584937   45126 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0915 07:36:43.584951   45126 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0915 07:36:43.584963   45126 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0915 07:36:43.584976   45126 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0915 07:36:43.585011   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.585024   45126 command_runner.go:130] > # rdt_config_file = ""
	I0915 07:36:43.585033   45126 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0915 07:36:43.585044   45126 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0915 07:36:43.585066   45126 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0915 07:36:43.585076   45126 command_runner.go:130] > # separate_pull_cgroup = ""
	I0915 07:36:43.585089   45126 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0915 07:36:43.585101   45126 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0915 07:36:43.585109   45126 command_runner.go:130] > # will be added.
	I0915 07:36:43.585116   45126 command_runner.go:130] > # default_capabilities = [
	I0915 07:36:43.585122   45126 command_runner.go:130] > # 	"CHOWN",
	I0915 07:36:43.585130   45126 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0915 07:36:43.585139   45126 command_runner.go:130] > # 	"FSETID",
	I0915 07:36:43.585146   45126 command_runner.go:130] > # 	"FOWNER",
	I0915 07:36:43.585151   45126 command_runner.go:130] > # 	"SETGID",
	I0915 07:36:43.585160   45126 command_runner.go:130] > # 	"SETUID",
	I0915 07:36:43.585169   45126 command_runner.go:130] > # 	"SETPCAP",
	I0915 07:36:43.585178   45126 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0915 07:36:43.585187   45126 command_runner.go:130] > # 	"KILL",
	I0915 07:36:43.585201   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585215   45126 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0915 07:36:43.585224   45126 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0915 07:36:43.585232   45126 command_runner.go:130] > # add_inheritable_capabilities = false
	I0915 07:36:43.585245   45126 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0915 07:36:43.585255   45126 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0915 07:36:43.585264   45126 command_runner.go:130] > default_sysctls = [
	I0915 07:36:43.585271   45126 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0915 07:36:43.585278   45126 command_runner.go:130] > ]
	I0915 07:36:43.585286   45126 command_runner.go:130] > # List of devices on the host that a
	I0915 07:36:43.585299   45126 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0915 07:36:43.585307   45126 command_runner.go:130] > # allowed_devices = [
	I0915 07:36:43.585314   45126 command_runner.go:130] > # 	"/dev/fuse",
	I0915 07:36:43.585317   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585327   45126 command_runner.go:130] > # List of additional devices. specified as
	I0915 07:36:43.585343   45126 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0915 07:36:43.585356   45126 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0915 07:36:43.585368   45126 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0915 07:36:43.585377   45126 command_runner.go:130] > # additional_devices = [
	I0915 07:36:43.585386   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585394   45126 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0915 07:36:43.585401   45126 command_runner.go:130] > # cdi_spec_dirs = [
	I0915 07:36:43.585405   45126 command_runner.go:130] > # 	"/etc/cdi",
	I0915 07:36:43.585414   45126 command_runner.go:130] > # 	"/var/run/cdi",
	I0915 07:36:43.585422   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585432   45126 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0915 07:36:43.585445   45126 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0915 07:36:43.585453   45126 command_runner.go:130] > # Defaults to false.
	I0915 07:36:43.585464   45126 command_runner.go:130] > # device_ownership_from_security_context = false
	I0915 07:36:43.585476   45126 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0915 07:36:43.585487   45126 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0915 07:36:43.585493   45126 command_runner.go:130] > # hooks_dir = [
	I0915 07:36:43.585500   45126 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0915 07:36:43.585508   45126 command_runner.go:130] > # ]
	I0915 07:36:43.585521   45126 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0915 07:36:43.585534   45126 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0915 07:36:43.585545   45126 command_runner.go:130] > # its default mounts from the following two files:
	I0915 07:36:43.585552   45126 command_runner.go:130] > #
	I0915 07:36:43.585561   45126 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0915 07:36:43.585574   45126 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0915 07:36:43.585583   45126 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0915 07:36:43.585587   45126 command_runner.go:130] > #
	I0915 07:36:43.585593   45126 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0915 07:36:43.585603   45126 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0915 07:36:43.585613   45126 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0915 07:36:43.585622   45126 command_runner.go:130] > #      only add mounts it finds in this file.
	I0915 07:36:43.585626   45126 command_runner.go:130] > #
	I0915 07:36:43.585633   45126 command_runner.go:130] > # default_mounts_file = ""
	I0915 07:36:43.585642   45126 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0915 07:36:43.585653   45126 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0915 07:36:43.585662   45126 command_runner.go:130] > pids_limit = 1024
	I0915 07:36:43.585673   45126 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0915 07:36:43.585685   45126 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0915 07:36:43.585697   45126 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0915 07:36:43.585709   45126 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0915 07:36:43.585715   45126 command_runner.go:130] > # log_size_max = -1
	I0915 07:36:43.585727   45126 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0915 07:36:43.585738   45126 command_runner.go:130] > # log_to_journald = false
	I0915 07:36:43.585748   45126 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0915 07:36:43.585760   45126 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0915 07:36:43.585770   45126 command_runner.go:130] > # Path to directory for container attach sockets.
	I0915 07:36:43.585780   45126 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0915 07:36:43.585789   45126 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0915 07:36:43.585798   45126 command_runner.go:130] > # bind_mount_prefix = ""
	I0915 07:36:43.585816   45126 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0915 07:36:43.585826   45126 command_runner.go:130] > # read_only = false
	I0915 07:36:43.585836   45126 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0915 07:36:43.585849   45126 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0915 07:36:43.585859   45126 command_runner.go:130] > # live configuration reload.
	I0915 07:36:43.585866   45126 command_runner.go:130] > # log_level = "info"
	I0915 07:36:43.585877   45126 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0915 07:36:43.585888   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.585894   45126 command_runner.go:130] > # log_filter = ""
	I0915 07:36:43.585903   45126 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0915 07:36:43.585914   45126 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0915 07:36:43.585923   45126 command_runner.go:130] > # separated by comma.
	I0915 07:36:43.585938   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.585947   45126 command_runner.go:130] > # uid_mappings = ""
	I0915 07:36:43.585958   45126 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0915 07:36:43.585970   45126 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0915 07:36:43.585980   45126 command_runner.go:130] > # separated by comma.
	I0915 07:36:43.585991   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.585997   45126 command_runner.go:130] > # gid_mappings = ""
	I0915 07:36:43.586007   45126 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0915 07:36:43.586020   45126 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0915 07:36:43.586033   45126 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0915 07:36:43.586048   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.586058   45126 command_runner.go:130] > # minimum_mappable_uid = -1
	I0915 07:36:43.586070   45126 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0915 07:36:43.586085   45126 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0915 07:36:43.586096   45126 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0915 07:36:43.586111   45126 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0915 07:36:43.586120   45126 command_runner.go:130] > # minimum_mappable_gid = -1
	I0915 07:36:43.586131   45126 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0915 07:36:43.586140   45126 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0915 07:36:43.586149   45126 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0915 07:36:43.586158   45126 command_runner.go:130] > # ctr_stop_timeout = 30
	I0915 07:36:43.586170   45126 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0915 07:36:43.586182   45126 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0915 07:36:43.586190   45126 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0915 07:36:43.586203   45126 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0915 07:36:43.586214   45126 command_runner.go:130] > drop_infra_ctr = false
	I0915 07:36:43.586227   45126 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0915 07:36:43.586239   45126 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0915 07:36:43.586253   45126 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0915 07:36:43.586260   45126 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0915 07:36:43.586270   45126 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0915 07:36:43.586277   45126 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0915 07:36:43.586284   45126 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0915 07:36:43.586293   45126 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0915 07:36:43.586304   45126 command_runner.go:130] > # shared_cpuset = ""
	I0915 07:36:43.586314   45126 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0915 07:36:43.586325   45126 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0915 07:36:43.586334   45126 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0915 07:36:43.586348   45126 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0915 07:36:43.586358   45126 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0915 07:36:43.586370   45126 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0915 07:36:43.586379   45126 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0915 07:36:43.586388   45126 command_runner.go:130] > # enable_criu_support = false
	I0915 07:36:43.586396   45126 command_runner.go:130] > # Enable/disable the generation of the container,
	I0915 07:36:43.586408   45126 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0915 07:36:43.586419   45126 command_runner.go:130] > # enable_pod_events = false
	I0915 07:36:43.586438   45126 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0915 07:36:43.586450   45126 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0915 07:36:43.586546   45126 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0915 07:36:43.586563   45126 command_runner.go:130] > # default_runtime = "runc"
	I0915 07:36:43.586575   45126 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0915 07:36:43.586589   45126 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0915 07:36:43.586627   45126 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0915 07:36:43.586642   45126 command_runner.go:130] > # creation as a file is not desired either.
	I0915 07:36:43.586660   45126 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0915 07:36:43.586672   45126 command_runner.go:130] > # the hostname is being managed dynamically.
	I0915 07:36:43.586682   45126 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0915 07:36:43.586689   45126 command_runner.go:130] > # ]
	I0915 07:36:43.586699   45126 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0915 07:36:43.586709   45126 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0915 07:36:43.586719   45126 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0915 07:36:43.586730   45126 command_runner.go:130] > # Each entry in the table should follow the format:
	I0915 07:36:43.586738   45126 command_runner.go:130] > #
	I0915 07:36:43.586749   45126 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0915 07:36:43.586760   45126 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0915 07:36:43.586807   45126 command_runner.go:130] > # runtime_type = "oci"
	I0915 07:36:43.586819   45126 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0915 07:36:43.586830   45126 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0915 07:36:43.586840   45126 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0915 07:36:43.586850   45126 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0915 07:36:43.586860   45126 command_runner.go:130] > # monitor_env = []
	I0915 07:36:43.586870   45126 command_runner.go:130] > # privileged_without_host_devices = false
	I0915 07:36:43.586880   45126 command_runner.go:130] > # allowed_annotations = []
	I0915 07:36:43.586889   45126 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0915 07:36:43.586897   45126 command_runner.go:130] > # Where:
	I0915 07:36:43.586905   45126 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0915 07:36:43.586918   45126 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0915 07:36:43.586929   45126 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0915 07:36:43.586944   45126 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0915 07:36:43.586954   45126 command_runner.go:130] > #   in $PATH.
	I0915 07:36:43.586964   45126 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0915 07:36:43.586974   45126 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0915 07:36:43.586983   45126 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0915 07:36:43.586990   45126 command_runner.go:130] > #   state.
	I0915 07:36:43.586999   45126 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0915 07:36:43.587013   45126 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0915 07:36:43.587027   45126 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0915 07:36:43.587039   45126 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0915 07:36:43.587052   45126 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0915 07:36:43.587066   45126 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0915 07:36:43.587076   45126 command_runner.go:130] > #   The currently recognized values are:
	I0915 07:36:43.587083   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0915 07:36:43.587106   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0915 07:36:43.587120   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0915 07:36:43.587130   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0915 07:36:43.587145   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0915 07:36:43.587158   45126 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0915 07:36:43.587172   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0915 07:36:43.587184   45126 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0915 07:36:43.587193   45126 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0915 07:36:43.587202   45126 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0915 07:36:43.587213   45126 command_runner.go:130] > #   deprecated option "conmon".
	I0915 07:36:43.587224   45126 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0915 07:36:43.587236   45126 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0915 07:36:43.587250   45126 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0915 07:36:43.587262   45126 command_runner.go:130] > #   should be moved to the container's cgroup
	I0915 07:36:43.587275   45126 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0915 07:36:43.587283   45126 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0915 07:36:43.587290   45126 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0915 07:36:43.587302   45126 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0915 07:36:43.587311   45126 command_runner.go:130] > #
	I0915 07:36:43.587320   45126 command_runner.go:130] > # Using the seccomp notifier feature:
	I0915 07:36:43.587330   45126 command_runner.go:130] > #
	I0915 07:36:43.587340   45126 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0915 07:36:43.587353   45126 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0915 07:36:43.587361   45126 command_runner.go:130] > #
	I0915 07:36:43.587371   45126 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0915 07:36:43.587382   45126 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0915 07:36:43.587388   45126 command_runner.go:130] > #
	I0915 07:36:43.587397   45126 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0915 07:36:43.587405   45126 command_runner.go:130] > # feature.
	I0915 07:36:43.587411   45126 command_runner.go:130] > #
	I0915 07:36:43.587420   45126 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0915 07:36:43.587433   45126 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0915 07:36:43.587446   45126 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0915 07:36:43.587458   45126 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0915 07:36:43.587471   45126 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0915 07:36:43.587478   45126 command_runner.go:130] > #
	I0915 07:36:43.587485   45126 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0915 07:36:43.587498   45126 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0915 07:36:43.587507   45126 command_runner.go:130] > #
	I0915 07:36:43.587517   45126 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0915 07:36:43.587529   45126 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0915 07:36:43.587537   45126 command_runner.go:130] > #
	I0915 07:36:43.587547   45126 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0915 07:36:43.587559   45126 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0915 07:36:43.587569   45126 command_runner.go:130] > # limitation.
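As a side note on the seccomp notifier wiring documented above (not part of this test run): the chosen runtime handler must list "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations, and the pod must carry that annotation, use restartPolicy Never, and run with a seccomp profile so syscalls can actually be blocked. A minimal, purely illustrative sketch of the pod side:

	kubectl run seccomp-demo --image=busybox --restart=Never \
	  --annotations="io.kubernetes.cri-o.seccompNotifierAction=stop" \
	  --command -- sleep 3600
	# pod name and image are placeholders; with "stop", CRI-O terminates the
	# workload ~5s after it hits a blocked syscall, as described above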
	I0915 07:36:43.587578   45126 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0915 07:36:43.587584   45126 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0915 07:36:43.587591   45126 command_runner.go:130] > runtime_type = "oci"
	I0915 07:36:43.587601   45126 command_runner.go:130] > runtime_root = "/run/runc"
	I0915 07:36:43.587611   45126 command_runner.go:130] > runtime_config_path = ""
	I0915 07:36:43.587621   45126 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0915 07:36:43.587630   45126 command_runner.go:130] > monitor_cgroup = "pod"
	I0915 07:36:43.587640   45126 command_runner.go:130] > monitor_exec_cgroup = ""
	I0915 07:36:43.587650   45126 command_runner.go:130] > monitor_env = [
	I0915 07:36:43.587661   45126 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0915 07:36:43.587667   45126 command_runner.go:130] > ]
	I0915 07:36:43.587674   45126 command_runner.go:130] > privileged_without_host_devices = false
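For reference, any additional runtime handler would follow the same shape as the runc entry just above. A rough sketch, assuming crun is installed at /usr/bin/crun and that a drop-in under /etc/crio/crio.conf.d/ is acceptable (handler name, file names, and paths are illustrative, not taken from this run):

	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/10-crun.conf
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	EOF
	sudo systemctl restart crio
	# pods select the handler via a RuntimeClass:
	kubectl apply -f - <<-'EOF'
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: crun
	handler: crun
	EOF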
	I0915 07:36:43.587687   45126 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0915 07:36:43.587699   45126 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0915 07:36:43.587710   45126 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0915 07:36:43.587725   45126 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0915 07:36:43.587739   45126 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0915 07:36:43.587752   45126 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0915 07:36:43.587767   45126 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0915 07:36:43.587779   45126 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0915 07:36:43.587791   45126 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0915 07:36:43.587805   45126 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0915 07:36:43.587815   45126 command_runner.go:130] > # Example:
	I0915 07:36:43.587825   45126 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0915 07:36:43.587836   45126 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0915 07:36:43.587846   45126 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0915 07:36:43.587857   45126 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0915 07:36:43.587865   45126 command_runner.go:130] > # cpuset = 0
	I0915 07:36:43.587869   45126 command_runner.go:130] > # cpushares = "0-1"
	I0915 07:36:43.587875   45126 command_runner.go:130] > # Where:
	I0915 07:36:43.587882   45126 command_runner.go:130] > # The workload name is workload-type.
	I0915 07:36:43.587897   45126 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0915 07:36:43.587910   45126 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0915 07:36:43.587922   45126 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0915 07:36:43.587937   45126 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0915 07:36:43.587949   45126 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
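Per the comments above, a pod opts into the example workload with the activation annotation alone (key only, value ignored); per-container tuning then uses the annotation_prefix form. A purely illustrative sketch using the example values shown above:

	kubectl run workload-demo --image=busybox --restart=Never \
	  --annotations="io.crio/workload=" \
	  --command -- sleep 3600
	# pod name/image are placeholders; the exact per-container override key
	# follows the $annotation_prefix.$resource/$ctrName form described above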
	I0915 07:36:43.587957   45126 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0915 07:36:43.587968   45126 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0915 07:36:43.587978   45126 command_runner.go:130] > # Default value is set to true
	I0915 07:36:43.587989   45126 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0915 07:36:43.588000   45126 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0915 07:36:43.588011   45126 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0915 07:36:43.588020   45126 command_runner.go:130] > # Default value is set to 'false'
	I0915 07:36:43.588031   45126 command_runner.go:130] > # disable_hostport_mapping = false
	I0915 07:36:43.588039   45126 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0915 07:36:43.588042   45126 command_runner.go:130] > #
	I0915 07:36:43.588049   45126 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0915 07:36:43.588059   45126 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0915 07:36:43.588070   45126 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0915 07:36:43.588079   45126 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0915 07:36:43.588088   45126 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0915 07:36:43.588093   45126 command_runner.go:130] > [crio.image]
	I0915 07:36:43.588127   45126 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0915 07:36:43.588133   45126 command_runner.go:130] > # default_transport = "docker://"
	I0915 07:36:43.588142   45126 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0915 07:36:43.588153   45126 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0915 07:36:43.588159   45126 command_runner.go:130] > # global_auth_file = ""
	I0915 07:36:43.588167   45126 command_runner.go:130] > # The image used to instantiate infra containers.
	I0915 07:36:43.588175   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.588182   45126 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0915 07:36:43.588192   45126 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0915 07:36:43.588201   45126 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0915 07:36:43.588208   45126 command_runner.go:130] > # This option supports live configuration reload.
	I0915 07:36:43.588213   45126 command_runner.go:130] > # pause_image_auth_file = ""
	I0915 07:36:43.588219   45126 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0915 07:36:43.588229   45126 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0915 07:36:43.588242   45126 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0915 07:36:43.588252   45126 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0915 07:36:43.588263   45126 command_runner.go:130] > # pause_command = "/pause"
	I0915 07:36:43.588272   45126 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0915 07:36:43.588284   45126 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0915 07:36:43.588295   45126 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0915 07:36:43.588308   45126 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0915 07:36:43.588317   45126 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0915 07:36:43.588330   45126 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0915 07:36:43.588340   45126 command_runner.go:130] > # pinned_images = [
	I0915 07:36:43.588346   45126 command_runner.go:130] > # ]
	I0915 07:36:43.588358   45126 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0915 07:36:43.588372   45126 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0915 07:36:43.588385   45126 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0915 07:36:43.588398   45126 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0915 07:36:43.588409   45126 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0915 07:36:43.588417   45126 command_runner.go:130] > # signature_policy = ""
	I0915 07:36:43.588423   45126 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0915 07:36:43.588436   45126 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0915 07:36:43.588449   45126 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0915 07:36:43.588463   45126 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0915 07:36:43.588475   45126 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0915 07:36:43.588485   45126 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0915 07:36:43.588498   45126 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0915 07:36:43.588510   45126 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0915 07:36:43.588517   45126 command_runner.go:130] > # changing them here.
	I0915 07:36:43.588522   45126 command_runner.go:130] > # insecure_registries = [
	I0915 07:36:43.588529   45126 command_runner.go:130] > # ]
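As the comments recommend, registry trust settings are better expressed in containers-registries.conf(5) than in insecure_registries here. A minimal sketch marking one private registry as insecure (the registry address and drop-in file name are placeholders):

	cat <<-'EOF' | sudo tee /etc/containers/registries.conf.d/50-insecure.conf
	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true
	EOF
	sudo systemctl restart crio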
	I0915 07:36:43.588540   45126 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0915 07:36:43.588550   45126 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0915 07:36:43.588557   45126 command_runner.go:130] > # image_volumes = "mkdir"
	I0915 07:36:43.588569   45126 command_runner.go:130] > # Temporary directory to use for storing big files
	I0915 07:36:43.588579   45126 command_runner.go:130] > # big_files_temporary_dir = ""
	I0915 07:36:43.588591   45126 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0915 07:36:43.588600   45126 command_runner.go:130] > # CNI plugins.
	I0915 07:36:43.588609   45126 command_runner.go:130] > [crio.network]
	I0915 07:36:43.588619   45126 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0915 07:36:43.588628   45126 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0915 07:36:43.588637   45126 command_runner.go:130] > # cni_default_network = ""
	I0915 07:36:43.588650   45126 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0915 07:36:43.588660   45126 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0915 07:36:43.588675   45126 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0915 07:36:43.588685   45126 command_runner.go:130] > # plugin_dirs = [
	I0915 07:36:43.588694   45126 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0915 07:36:43.588703   45126 command_runner.go:130] > # ]
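To see which CNI configuration CRI-O would actually pick on a node from this run (kindnet, per the multinode detection later in the log), one can inspect network_dir directly, e.g. via minikube ssh; the conflist file name below is an assumption for kindnet:

	ls /etc/cni/net.d/
	sudo cat /etc/cni/net.d/10-kindnet.conflist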
	I0915 07:36:43.588713   45126 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0915 07:36:43.588719   45126 command_runner.go:130] > [crio.metrics]
	I0915 07:36:43.588727   45126 command_runner.go:130] > # Globally enable or disable metrics support.
	I0915 07:36:43.588736   45126 command_runner.go:130] > enable_metrics = true
	I0915 07:36:43.588746   45126 command_runner.go:130] > # Specify enabled metrics collectors.
	I0915 07:36:43.588754   45126 command_runner.go:130] > # Per default all metrics are enabled.
	I0915 07:36:43.588767   45126 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0915 07:36:43.588780   45126 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0915 07:36:43.588792   45126 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0915 07:36:43.588801   45126 command_runner.go:130] > # metrics_collectors = [
	I0915 07:36:43.588809   45126 command_runner.go:130] > # 	"operations",
	I0915 07:36:43.588817   45126 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0915 07:36:43.588823   45126 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0915 07:36:43.588832   45126 command_runner.go:130] > # 	"operations_errors",
	I0915 07:36:43.588843   45126 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0915 07:36:43.588849   45126 command_runner.go:130] > # 	"image_pulls_by_name",
	I0915 07:36:43.588860   45126 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0915 07:36:43.588870   45126 command_runner.go:130] > # 	"image_pulls_failures",
	I0915 07:36:43.588880   45126 command_runner.go:130] > # 	"image_pulls_successes",
	I0915 07:36:43.588889   45126 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0915 07:36:43.588900   45126 command_runner.go:130] > # 	"image_layer_reuse",
	I0915 07:36:43.588908   45126 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0915 07:36:43.588915   45126 command_runner.go:130] > # 	"containers_oom_total",
	I0915 07:36:43.588920   45126 command_runner.go:130] > # 	"containers_oom",
	I0915 07:36:43.588929   45126 command_runner.go:130] > # 	"processes_defunct",
	I0915 07:36:43.588938   45126 command_runner.go:130] > # 	"operations_total",
	I0915 07:36:43.588946   45126 command_runner.go:130] > # 	"operations_latency_seconds",
	I0915 07:36:43.588957   45126 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0915 07:36:43.588967   45126 command_runner.go:130] > # 	"operations_errors_total",
	I0915 07:36:43.588977   45126 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0915 07:36:43.588994   45126 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0915 07:36:43.589003   45126 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0915 07:36:43.589011   45126 command_runner.go:130] > # 	"image_pulls_success_total",
	I0915 07:36:43.589015   45126 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0915 07:36:43.589025   45126 command_runner.go:130] > # 	"containers_oom_count_total",
	I0915 07:36:43.589043   45126 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0915 07:36:43.589054   45126 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0915 07:36:43.589062   45126 command_runner.go:130] > # ]
	I0915 07:36:43.589070   45126 command_runner.go:130] > # The port on which the metrics server will listen.
	I0915 07:36:43.589079   45126 command_runner.go:130] > # metrics_port = 9090
	I0915 07:36:43.589091   45126 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0915 07:36:43.589102   45126 command_runner.go:130] > # metrics_socket = ""
	I0915 07:36:43.589110   45126 command_runner.go:130] > # The certificate for the secure metrics server.
	I0915 07:36:43.589119   45126 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0915 07:36:43.589132   45126 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0915 07:36:43.589143   45126 command_runner.go:130] > # certificate on any modification event.
	I0915 07:36:43.589153   45126 command_runner.go:130] > # metrics_cert = ""
	I0915 07:36:43.589165   45126 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0915 07:36:43.589175   45126 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0915 07:36:43.589185   45126 command_runner.go:130] > # metrics_key = ""
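Since enable_metrics is set to true above and metrics_port defaults to 9090, the collectors listed here can be scraped directly on the node; a quick illustrative check:

	curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_' | head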
	I0915 07:36:43.589195   45126 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0915 07:36:43.589202   45126 command_runner.go:130] > [crio.tracing]
	I0915 07:36:43.589210   45126 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0915 07:36:43.589220   45126 command_runner.go:130] > # enable_tracing = false
	I0915 07:36:43.589230   45126 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0915 07:36:43.589240   45126 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0915 07:36:43.589251   45126 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0915 07:36:43.589261   45126 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0915 07:36:43.589268   45126 command_runner.go:130] > # CRI-O NRI configuration.
	I0915 07:36:43.589277   45126 command_runner.go:130] > [crio.nri]
	I0915 07:36:43.589285   45126 command_runner.go:130] > # Globally enable or disable NRI.
	I0915 07:36:43.589292   45126 command_runner.go:130] > # enable_nri = false
	I0915 07:36:43.589297   45126 command_runner.go:130] > # NRI socket to listen on.
	I0915 07:36:43.589308   45126 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0915 07:36:43.589318   45126 command_runner.go:130] > # NRI plugin directory to use.
	I0915 07:36:43.589326   45126 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0915 07:36:43.589337   45126 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0915 07:36:43.589347   45126 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0915 07:36:43.589359   45126 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0915 07:36:43.589368   45126 command_runner.go:130] > # nri_disable_connections = false
	I0915 07:36:43.589379   45126 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0915 07:36:43.589387   45126 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0915 07:36:43.589393   45126 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0915 07:36:43.589402   45126 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0915 07:36:43.589415   45126 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0915 07:36:43.589425   45126 command_runner.go:130] > [crio.stats]
	I0915 07:36:43.589438   45126 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0915 07:36:43.589449   45126 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0915 07:36:43.589458   45126 command_runner.go:130] > # stats_collection_period = 0
	I0915 07:36:43.589546   45126 cni.go:84] Creating CNI manager for ""
	I0915 07:36:43.589560   45126 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0915 07:36:43.589570   45126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:36:43.589597   45126 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-127008 NodeName:multinode-127008 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 07:36:43.589754   45126 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-127008"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 07:36:43.589837   45126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:36:43.600240   45126 command_runner.go:130] > kubeadm
	I0915 07:36:43.600261   45126 command_runner.go:130] > kubectl
	I0915 07:36:43.600267   45126 command_runner.go:130] > kubelet
	I0915 07:36:43.600331   45126 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:36:43.600404   45126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 07:36:43.610475   45126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0915 07:36:43.627783   45126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:36:43.645189   45126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
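The kubeadm config rendered above is what lands in /var/tmp/minikube/kubeadm.yaml.new. If one wanted to double-check it by hand, recent kubeadm releases include a validator; a sketch, assuming the subcommand is present in the bundled v1.31.1 binary:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new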
	I0915 07:36:43.662790   45126 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I0915 07:36:43.666711   45126 command_runner.go:130] > 192.168.39.241	control-plane.minikube.internal
	I0915 07:36:43.666863   45126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:36:43.805470   45126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:36:43.821706   45126 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008 for IP: 192.168.39.241
	I0915 07:36:43.821728   45126 certs.go:194] generating shared ca certs ...
	I0915 07:36:43.821744   45126 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:36:43.821927   45126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:36:43.821980   45126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:36:43.821994   45126 certs.go:256] generating profile certs ...
	I0915 07:36:43.822098   45126 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/client.key
	I0915 07:36:43.822176   45126 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.key.e0ebbffb
	I0915 07:36:43.822238   45126 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.key
	I0915 07:36:43.822251   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:36:43.822271   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:36:43.822289   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:36:43.822308   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:36:43.822323   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:36:43.822341   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:36:43.822360   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:36:43.822378   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:36:43.822435   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:36:43.822481   45126 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:36:43.822494   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:36:43.822522   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:36:43.822558   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:36:43.822588   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:36:43.822640   45126 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:36:43.822683   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem -> /usr/share/ca-certificates/13190.pem
	I0915 07:36:43.822704   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> /usr/share/ca-certificates/131902.pem
	I0915 07:36:43.822724   45126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:43.823374   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:36:43.850059   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:36:43.874963   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:36:43.898575   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:36:43.922419   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0915 07:36:43.946750   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:36:43.971550   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:36:43.995255   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/multinode-127008/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:36:44.018731   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:36:44.042915   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:36:44.066718   45126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:36:44.090805   45126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:36:44.107506   45126 ssh_runner.go:195] Run: openssl version
	I0915 07:36:44.113255   45126 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0915 07:36:44.113540   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:36:44.125078   45126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.129857   45126 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.129887   45126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.129929   45126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:36:44.135665   45126 command_runner.go:130] > 3ec20f2e
	I0915 07:36:44.135732   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:36:44.145180   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:36:44.156030   45126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.160579   45126 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.160741   45126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.160820   45126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:36:44.166715   45126 command_runner.go:130] > b5213941
	I0915 07:36:44.166771   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:36:44.176518   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:36:44.187515   45126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.192245   45126 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.192270   45126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.192302   45126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:36:44.197899   45126 command_runner.go:130] > 51391683
	I0915 07:36:44.197950   45126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
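The openssl/ln sequence above is the usual subject-hash trust setup: the hash printed by openssl x509 -hash (e.g. b5213941 for minikubeCA.pem) becomes the symlink name under /etc/ssl/certs. A quick manual check on the node:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${HASH}.0"   # expected: symlink to /etc/ssl/certs/minikubeCA.pem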
	I0915 07:36:44.207252   45126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:36:44.211785   45126 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:36:44.211807   45126 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0915 07:36:44.211815   45126 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0915 07:36:44.211825   45126 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0915 07:36:44.211848   45126 command_runner.go:130] > Access: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211856   45126 command_runner.go:130] > Modify: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211861   45126 command_runner.go:130] > Change: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211867   45126 command_runner.go:130] >  Birth: 2024-09-15 07:29:51.813189836 +0000
	I0915 07:36:44.211999   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 07:36:44.217476   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.217684   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 07:36:44.223380   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.223450   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 07:36:44.229025   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.229078   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 07:36:44.234739   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.234929   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 07:36:44.240253   45126 command_runner.go:130] > Certificate will not expire
	I0915 07:36:44.240508   45126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0915 07:36:44.245770   45126 command_runner.go:130] > Certificate will not expire
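Each -checkend 86400 probe above exits 0 and prints "Certificate will not expire" when the certificate is still valid for at least another 24 hours, and exits non-zero otherwise. Checking one certificate by hand (illustrative):

	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "valid for >= 24h" || echo "expires within 24h"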
	I0915 07:36:44.245988   45126 kubeadm.go:392] StartCluster: {Name:multinode-127008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-127008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:36:44.246081   45126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:36:44.246116   45126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:36:44.281938   45126 command_runner.go:130] > 55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6
	I0915 07:36:44.281960   45126 command_runner.go:130] > ac59c00839a05466aafe55897170f04c23d2e286c86e120536f464faa1bef2b7
	I0915 07:36:44.281966   45126 command_runner.go:130] > a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473
	I0915 07:36:44.281972   45126 command_runner.go:130] > 55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01
	I0915 07:36:44.281978   45126 command_runner.go:130] > 63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6
	I0915 07:36:44.281985   45126 command_runner.go:130] > 672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e
	I0915 07:36:44.281991   45126 command_runner.go:130] > 80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7
	I0915 07:36:44.282007   45126 command_runner.go:130] > fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe
	I0915 07:36:44.282017   45126 command_runner.go:130] > 39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0
	I0915 07:36:44.282040   45126 cri.go:89] found id: "55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6"
	I0915 07:36:44.282052   45126 cri.go:89] found id: "ac59c00839a05466aafe55897170f04c23d2e286c86e120536f464faa1bef2b7"
	I0915 07:36:44.282057   45126 cri.go:89] found id: "a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473"
	I0915 07:36:44.282061   45126 cri.go:89] found id: "55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01"
	I0915 07:36:44.282064   45126 cri.go:89] found id: "63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6"
	I0915 07:36:44.282068   45126 cri.go:89] found id: "672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e"
	I0915 07:36:44.282071   45126 cri.go:89] found id: "80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7"
	I0915 07:36:44.282074   45126 cri.go:89] found id: "fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe"
	I0915 07:36:44.282076   45126 cri.go:89] found id: "39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0"
	I0915 07:36:44.282082   45126 cri.go:89] found id: ""
	I0915 07:36:44.282125   45126 ssh_runner.go:195] Run: sudo runc list -f json
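The container IDs listed above come from crictl filtered on the kube-system namespace label; any of them can be inspected the same way on the node, e.g. using the first ID from this run:

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect 55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6 | head -n 20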
	
	
	==> CRI-O <==
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.608686660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c725ce85-60f4-43f4-b332-1f1970ea04fc name=/runtime.v1.RuntimeService/Version
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.609563490Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dc226eb-d14c-46c3-9515-2b7ceb326c9f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.610023892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386054609995750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dc226eb-d14c-46c3-9515-2b7ceb326c9f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.616696791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6a126ef-d8da-4df1-900c-738be4489e49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.616777967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6a126ef-d8da-4df1-900c-738be4489e49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.618265870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6a126ef-d8da-4df1-900c-738be4489e49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.660040919Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e868a7dd-b2f8-48c8-b7a7-2e8efa14710c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.660487382Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-zzxt7,Uid:0efb9514-adf1-47c0-88b6-6a2cc864f5f4,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385845005992896,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:36:50.847866641Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9c49,Uid:6d81ba1a-2068-472c-ad61-31bb95fa15c9,Namespace:kube-system,Attempt:2,}
,State:SANDBOX_READY,CreatedAt:1726385811278593588,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:36:50.847867851Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&PodSandboxMetadata{Name:kindnet-jxp4h,Uid:bb0d3de8-3336-4820-b100-436f18b71976,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385811233920428,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-09-15T07:36:50.847856546Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d96e6d53-fc72-4e99-9472-374a5a0ca92e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385811198462838,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-15T07:36:50.847865242Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&PodSandboxMetadata{Name:kube-proxy-57hqd,Uid:d72582fe-88ed-40b5-b13b-feebd1f83d34,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385811172947165,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:36:50.847862799Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-127008,Uid:0642b09dcf6b045abb0cdfb6f7dc866d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385806390506526,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.241:8443,kubernetes.io/config.hash: 0642b09dcf6b045abb0cdfb6f7dc866d,kubernetes.io/config.seen: 2024-09-15T07:36:45.860969682Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e521cda9505b9f96578f12e044aa7bad94
754a006d4b8752a7389d39f406d3e2,Metadata:&PodSandboxMetadata{Name:etcd-multinode-127008,Uid:310e1fd971ec50ca002b32392fdd948f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385806385932526,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.241:2379,kubernetes.io/config.hash: 310e1fd971ec50ca002b32392fdd948f,kubernetes.io/config.seen: 2024-09-15T07:36:45.860976641Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-127008,Uid:5142f7dfe2b1054017e1481fe790ad09,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385806383624823,Labels:map[string]st
ring{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5142f7dfe2b1054017e1481fe790ad09,kubernetes.io/config.seen: 2024-09-15T07:36:45.860973690Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-127008,Uid:ccda2fa119113ff2f2a0c69b57343842,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726385806382051993,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: ccda2fa119113ff2f2a0c69b57343842,kubernetes.io/config.seen: 2024-09-15T07:36:45.860975614Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9c49,Uid:6d81ba1a-2068-472c-ad61-31bb95fa15c9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726385792786082855,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:30:17.910930513Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-zzxt7,Uid:0efb9514-adf1-47c0-88b6-6a2cc864f5f4,Namespace:def
ault,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385472395709276,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:31:12.084970412Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d96e6d53-fc72-4e99-9472-374a5a0ca92e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385418215321640,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[
string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-15T07:30:17.906941232Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&PodSandboxMetadata{Name:kube-proxy-57hqd,Uid:d72582fe-88ed-40b5-b13b-feebd1f83d34,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385405879287211,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:30:05.549644666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&PodSandboxMetadata{Name:kindnet-jxp4h,Uid:bb0d3de8-3336-4820-b100-436f18b71976,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385405870842758,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-15T07:30:05.539021929Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-127008,Uid:0642b09dcf6b045abb0cdfb6f7dc866d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385395274608266,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.241:8443,kubernetes.io/config.hash: 0642b09dcf6b045abb0cdfb6f7dc866d,kubernetes.io/config.seen: 2024-09-15T07:29:54.811959617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3fe28e7fa0bc15
93a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-127008,Uid:5142f7dfe2b1054017e1481fe790ad09,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385395273997443,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5142f7dfe2b1054017e1481fe790ad09,kubernetes.io/config.seen: 2024-09-15T07:29:54.811960632Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&PodSandboxMetadata{Name:etcd-multinode-127008,Uid:310e1fd971ec50ca002b32392fdd948f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385395270124075,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.241:2379,kubernetes.io/config.hash: 310e1fd971ec50ca002b32392fdd948f,kubernetes.io/config.seen: 2024-09-15T07:29:54.811956005Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-127008,Uid:ccda2fa119113ff2f2a0c69b57343842,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726385395268492713,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,tier: control-plane,},Annotati
ons:map[string]string{kubernetes.io/config.hash: ccda2fa119113ff2f2a0c69b57343842,kubernetes.io/config.seen: 2024-09-15T07:29:54.811961453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e868a7dd-b2f8-48c8-b7a7-2e8efa14710c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.661405199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a84ac2f8-b67e-419a-9e9f-d8423abcef3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.661468584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a84ac2f8-b67e-419a-9e9f-d8423abcef3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.661796993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a84ac2f8-b67e-419a-9e9f-d8423abcef3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.673045675Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74b68737-4a4f-460d-b951-6262ba283393 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.673115273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74b68737-4a4f-460d-b951-6262ba283393 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.674642825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34fe36d5-9735-4206-b139-1e0841c5ea5d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.675031490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386054675011478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34fe36d5-9735-4206-b139-1e0841c5ea5d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.675689546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6632ccd-0a8a-4bad-b4aa-dbf4f7e800eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.675899146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6632ccd-0a8a-4bad-b4aa-dbf4f7e800eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.676381550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6632ccd-0a8a-4bad-b4aa-dbf4f7e800eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.721996051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f329afe-3ba2-44a5-a2f1-a858d40a1575 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.722069749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f329afe-3ba2-44a5-a2f1-a858d40a1575 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.723075138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c7e20f8-5667-44b5-9f25-86e112fceb98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.723712405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386054723688705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c7e20f8-5667-44b5-9f25-86e112fceb98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.724269569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dc4311c-4465-4180-9ebb-b63b75444c9e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.724343451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dc4311c-4465-4180-9ebb-b63b75444c9e name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:40:54 multinode-127008 crio[2818]: time="2024-09-15 07:40:54.724702457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37c2fd09af8f0181d1ea3604c03ec79736d7ead318710eb250ce02f69b9a4c83,PodSandboxId:4aef18e039dfecea913bd72e9cb01f718a234bda8adf8d15cea528bf7b1e008f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726385845143134698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1,PodSandboxId:5586a7fefd58b11d465b192fcaf4a9b4ded14fd2cda739bf04f03728e516c443,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726385811701507031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587,PodSandboxId:6cd3f10857e40eb3f7b0a238b8d6bf26b4cb63f73f86169a6f248c4fbcfc7b0a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726385811718025162,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0d3de8-3336-4820-b100-436f1
8b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f01d79f94bbd11d2c227050432b502f9528822bb531053e7c84dcff22037b6,PodSandboxId:47be1c612fc77bf63cfed388d59ec387c4bb60d4868c4420f8bb9b5c6852e64f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726385811548331382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193,PodSandboxId:f8183a3783fad4f63edb442e50c0a975dc478e5f5670ddfb99ae1a269834cc3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726385811469377886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b-feebd1f83d34,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914,PodSandboxId:477e40c71a816a727cfd80b4c5cae7961dbfc025b9b8e5250340b348cfdff29d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726385806641623189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98,PodSandboxId:2535da0e5974604ded97098bbf7f68538f8d7e6e28159b0d421759f577654568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726385806600485267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c69b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1,PodSandboxId:e521cda9505b9f96578f12e044aa7bad94754a006d4b8752a7389d39f406d3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726385806606148796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de,PodSandboxId:0f7c19e1f862abeecab0b049f2c092908606f0a6afa3fc1698353623e8da72c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726385806544572893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7dfe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6,PodSandboxId:3bf86a80db60ba44a63d8b82f6fa328c55f380885bfeb3a5ebfbb91c0b00176b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726385792958007980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q9c49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d81ba1a-2068-472c-ad61-31bb95fa15c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb42e6614b1dc34434df7c6ef272ae815c4e82a1d1a3336d5f2ad81860e364,PodSandboxId:31fa412bfc060d26df0e26abdab1f36377f3d1eb7409726fcf7e0029d5f9b1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726385475816744582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-zzxt7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0efb9514-adf1-47c0-88b6-6a2cc864f5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a92f5779f5c6326bc828958742aa89ea6fa0a05017c60e8542b49f27e59f2473,PodSandboxId:fa08f2e1ecce819a899d69e83bdde2cdd942c474b3b6f2ccf6671b180bf6d49b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726385418364291532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d96e6d53-fc72-4e99-9472-374a5a0ca92e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01,PodSandboxId:1fe92302cde426d3ab2b0fa0ed0d76907b9f0e8ad6e6ee5270c5423823417c29,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726385406483713292,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jxp4h,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: bb0d3de8-3336-4820-b100-436f18b71976,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6,PodSandboxId:e8bd420d8e45e36db607f0a49fb9735ddd7f9b648788639ebee39da47a8f9761,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726385406269655972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-57hqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72582fe-88ed-40b5-b13b
-feebd1f83d34,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e,PodSandboxId:3fe28e7fa0bc1593a88b75a1ca0aab3fc9a8510289dd8cf499233921d76b541d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726385395508565004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5142f7d
fe2b1054017e1481fe790ad09,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe,PodSandboxId:4051b763c60f6f8efe20854c7fd2da62d852f5434ce440f88cd7ba8c8082cba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726385395501235325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccda2fa119113ff2f2a0c6
9b57343842,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7,PodSandboxId:1706a91c6cc99bda49accd3428e7f61a966e864beac7f9fd296fc6e5201d53e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726385395506397104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 310e1fd971ec50ca002b32392fdd948f,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0,PodSandboxId:b9d703577515d64b5fc6ca9667cf8407a1253f866e24decda285378f0016a62c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726385395433160552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-127008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0642b09dcf6b045abb0cdfb6f7dc866d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dc4311c-4465-4180-9ebb-b63b75444c9e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37c2fd09af8f0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   4aef18e039dfe       busybox-7dff88458-zzxt7
	905abdc62484b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   6cd3f10857e40       kindnet-jxp4h
	c401cb18134d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   2                   5586a7fefd58b       coredns-7c65d6cfc9-q9c49
	77f01d79f94bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   47be1c612fc77       storage-provisioner
	d8350dbed2d0e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   f8183a3783fad       kube-proxy-57hqd
	0db0b1951a788       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   477e40c71a816       kube-apiserver-multinode-127008
	07512bfcb800f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   e521cda9505b9       etcd-multinode-127008
	e199d57146177       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   2535da0e59746       kube-scheduler-multinode-127008
	e470f2131890f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   0f7c19e1f862a       kube-controller-manager-multinode-127008
	55950a0433ba0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Exited              coredns                   1                   3bf86a80db60b       coredns-7c65d6cfc9-q9c49
	7deb42e6614b1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   31fa412bfc060       busybox-7dff88458-zzxt7
	a92f5779f5c63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   fa08f2e1ecce8       storage-provisioner
	55cc3a66166ca       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   1fe92302cde42       kindnet-jxp4h
	63e0b614cde44       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   e8bd420d8e45e       kube-proxy-57hqd
	672943905b036       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   3fe28e7fa0bc1       kube-controller-manager-multinode-127008
	80fe08f547568       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   1706a91c6cc99       etcd-multinode-127008
	fd304bb04be08       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   4051b763c60f6       kube-scheduler-multinode-127008
	39a551c824574       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   b9d703577515d       kube-apiserver-multinode-127008
	
	
	==> coredns [55950a0433ba081fbfb4cb4a3ac6a434688f2ce4d46c43b8cafaa70bbbdf15b6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43639 - 51318 "HINFO IN 6237801041562186729.5309758183064330623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013815683s
	
	
	==> coredns [c401cb18134d904ba51c09fdae3067ebe5ea5f0b1f312f7eb9cea3ab923c02b1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58976 - 62626 "HINFO IN 4417880172304727251.3385481419910881983. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015929217s
	
	
	==> describe nodes <==
	Name:               multinode-127008
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-127008
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=multinode-127008
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_30_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:29:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-127008
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:40:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:29:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:29:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:29:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:36:50 +0000   Sun, 15 Sep 2024 07:30:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    multinode-127008
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6ebf40788f64241adea960784298779
	  System UUID:                c6ebf407-88f6-4241-adea-960784298779
	  Boot ID:                    1c986149-b3d5-42c8-a740-7cb144f5b0b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zzxt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 coredns-7c65d6cfc9-q9c49                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-127008                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-jxp4h                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-127008             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-127008    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-57hqd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-127008             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 10m                   kube-proxy       
	  Normal  Starting                 4m3s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                   kubelet          Node multinode-127008 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                   kubelet          Node multinode-127008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                   kubelet          Node multinode-127008 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                   node-controller  Node multinode-127008 event: Registered Node multinode-127008 in Controller
	  Normal  NodeReady                10m                   kubelet          Node multinode-127008 status is now: NodeReady
	  Normal  Starting                 4m10s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m10s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m10s)  kubelet          Node multinode-127008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m10s)  kubelet          Node multinode-127008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m10s)  kubelet          Node multinode-127008 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m2s                  node-controller  Node multinode-127008 event: Registered Node multinode-127008 in Controller
	
	
	Name:               multinode-127008-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-127008-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=multinode-127008
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_37_30_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:37:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-127008-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:38:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:39:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:39:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:39:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 15 Sep 2024 07:38:00 +0000   Sun, 15 Sep 2024 07:39:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    multinode-127008-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 401031e125c841a58c988df979495fee
	  System UUID:                401031e1-25c8-41a5-8c98-8df979495fee
	  Boot ID:                    c06de3e8-aa53-45c7-b3f2-db5a8c15b6cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-96v48    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-xvllr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-q96bk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-127008-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-127008-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-127008-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m46s                  kubelet          Node multinode-127008-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m26s)  kubelet          Node multinode-127008-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m26s)  kubelet          Node multinode-127008-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m26s)  kubelet          Node multinode-127008-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-127008-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-127008-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.045927] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.187919] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.109009] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.283580] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.924438] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.080489] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.057265] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.986358] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.085839] kauditd_printk_skb: 69 callbacks suppressed
	[Sep15 07:30] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.127740] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.725526] kauditd_printk_skb: 60 callbacks suppressed
	[Sep15 07:31] kauditd_printk_skb: 14 callbacks suppressed
	[Sep15 07:36] systemd-fstab-generator[2742]: Ignoring "noauto" option for root device
	[  +0.150330] systemd-fstab-generator[2755]: Ignoring "noauto" option for root device
	[  +0.176673] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.136174] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.282199] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +9.745607] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +0.082633] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.841229] systemd-fstab-generator[3045]: Ignoring "noauto" option for root device
	[  +5.740788] kauditd_printk_skb: 76 callbacks suppressed
	[Sep15 07:37] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[  +0.096539] kauditd_printk_skb: 36 callbacks suppressed
	[ +19.497788] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [07512bfcb800f20ed88f464a40a8bc7847f85a444f8bf9ac3c6efa4936c7cee1] <==
	{"level":"info","ts":"2024-09-15T07:36:46.941104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 switched to configuration voters=(9516406204709898018)"}
	{"level":"info","ts":"2024-09-15T07:36:46.942746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","added-peer-id":"84111105ea0e8722","added-peer-peer-urls":["https://192.168.39.241:2380"]}
	{"level":"info","ts":"2024-09-15T07:36:46.942890Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:36:46.942939Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:36:46.945840Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-15T07:36:46.946067Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"84111105ea0e8722","initial-advertise-peer-urls":["https://192.168.39.241:2380"],"listen-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T07:36:46.946115Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T07:36:46.948389Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:36:46.948421Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:36:48.711584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-15T07:36:48.711659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T07:36:48.711690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 received MsgPreVoteResp from 84111105ea0e8722 at term 2"}
	{"level":"info","ts":"2024-09-15T07:36:48.711706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.711711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 received MsgVoteResp from 84111105ea0e8722 at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.711719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became leader at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.711726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 84111105ea0e8722 elected leader 84111105ea0e8722 at term 3"}
	{"level":"info","ts":"2024-09-15T07:36:48.715032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:36:48.716029Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:36:48.714981Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"84111105ea0e8722","local-member-attributes":"{Name:multinode-127008 ClientURLs:[https://192.168.39.241:2379]}","request-path":"/0/members/84111105ea0e8722/attributes","cluster-id":"73137fd659599d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T07:36:48.716610Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:36:48.716830Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T07:36:48.716844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T07:36:48.716966Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.241:2379"}
	{"level":"info","ts":"2024-09-15T07:36:48.717470Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:36:48.718267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [80fe08f547568769f96d52df19d48c5bc2f22baa9173c1b69bce841801609df7] <==
	{"level":"info","ts":"2024-09-15T07:29:56.622272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T07:29:56.623071Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:29:56.627592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.241:2379"}
	{"level":"info","ts":"2024-09-15T07:29:56.628375Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-15T07:30:49.133941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.379106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-127008-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:30:49.134303Z","caller":"traceutil/trace.go:171","msg":"trace[459317188] range","detail":"{range_begin:/registry/minions/multinode-127008-m02; range_end:; response_count:0; response_revision:437; }","duration":"132.796039ms","start":"2024-09-15T07:30:49.001490Z","end":"2024-09-15T07:30:49.134286Z","steps":["trace[459317188] 'range keys from in-memory index tree'  (duration: 132.227089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T07:30:49.134142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.692155ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:30:49.134991Z","caller":"traceutil/trace.go:171","msg":"trace[1520228515] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:437; }","duration":"202.555114ms","start":"2024-09-15T07:30:48.932425Z","end":"2024-09-15T07:30:49.134980Z","steps":["trace[1520228515] 'range keys from in-memory index tree'  (duration: 201.686275ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:31:47.189638Z","caller":"traceutil/trace.go:171","msg":"trace[1231691396] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"213.965993ms","start":"2024-09-15T07:31:46.975646Z","end":"2024-09-15T07:31:47.189612Z","steps":["trace[1231691396] 'read index received'  (duration: 28.446598ms)","trace[1231691396] 'applied index is now lower than readState.Index'  (duration: 185.518853ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:31:47.189999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.330151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-127008-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:31:47.190336Z","caller":"traceutil/trace.go:171","msg":"trace[369651290] range","detail":"{range_begin:/registry/minions/multinode-127008-m03; range_end:; response_count:0; response_revision:576; }","duration":"214.700516ms","start":"2024-09-15T07:31:46.975626Z","end":"2024-09-15T07:31:47.190327Z","steps":["trace[369651290] 'agreement among raft nodes before linearized reading'  (duration: 214.279624ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:31:47.190053Z","caller":"traceutil/trace.go:171","msg":"trace[467158830] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"236.331479ms","start":"2024-09-15T07:31:46.953707Z","end":"2024-09-15T07:31:47.190039Z","steps":["trace[467158830] 'process raft request'  (duration: 175.001843ms)","trace[467158830] 'compare'  (duration: 60.777874ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:31:47.190291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.676262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T07:31:47.191911Z","caller":"traceutil/trace.go:171","msg":"trace[1858409489] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:576; }","duration":"179.291027ms","start":"2024-09-15T07:31:47.012601Z","end":"2024-09-15T07:31:47.191892Z","steps":["trace[1858409489] 'agreement among raft nodes before linearized reading'  (duration: 177.656652ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:31:53.246331Z","caller":"traceutil/trace.go:171","msg":"trace[1745847829] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"213.154006ms","start":"2024-09-15T07:31:53.033163Z","end":"2024-09-15T07:31:53.246317Z","steps":["trace[1745847829] 'process raft request'  (duration: 212.992867ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:35:01.833260Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-15T07:35:01.833406Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-127008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"]}
	{"level":"warn","ts":"2024-09-15T07:35:01.833581Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:35:01.833738Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:35:01.881472Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.241:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T07:35:01.881537Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.241:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T07:35:01.884355Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"84111105ea0e8722","current-leader-member-id":"84111105ea0e8722"}
	{"level":"info","ts":"2024-09-15T07:35:01.888283Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:35:01.888521Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-09-15T07:35:01.888560Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-127008","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"]}
	
	
	==> kernel <==
	 07:40:55 up 11 min,  0 users,  load average: 0.09, 0.14, 0.09
	Linux multinode-127008 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [55cc3a66166ca6b1b2346612cbb83130236bdf87c321fa7d3873f51361bd2a01] <==
	I0915 07:34:17.466714       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:27.473931       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:27.474275       1 main.go:299] handling current node
	I0915 07:34:27.474350       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:27.474377       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:27.474582       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:27.474604       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:37.466510       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:37.466568       1 main.go:299] handling current node
	I0915 07:34:37.466585       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:37.466591       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:37.466737       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:37.466760       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:47.474018       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:47.474121       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:47.474302       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:47.474333       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	I0915 07:34:47.474407       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:47.474428       1 main.go:299] handling current node
	I0915 07:34:57.465698       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:34:57.465745       1 main.go:299] handling current node
	I0915 07:34:57.465759       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:34:57.465764       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:34:57.465899       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0915 07:34:57.465924       1 main.go:322] Node multinode-127008-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [905abdc62484b035aca2d6076cbfdb8df7d038907df7d81f31bcf7d2aa25f587] <==
	I0915 07:39:52.669450       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:40:02.669254       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:40:02.670032       1 main.go:299] handling current node
	I0915 07:40:02.670114       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:40:02.670126       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:40:12.669460       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:40:12.669557       1 main.go:299] handling current node
	I0915 07:40:12.669577       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:40:12.669583       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:40:22.675648       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:40:22.675950       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:40:22.676271       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:40:22.676306       1 main.go:299] handling current node
	I0915 07:40:32.675854       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:40:32.675991       1 main.go:299] handling current node
	I0915 07:40:32.676040       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:40:32.676060       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:40:42.677610       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:40:42.677652       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	I0915 07:40:42.677777       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:40:42.677800       1 main.go:299] handling current node
	I0915 07:40:52.669730       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0915 07:40:52.669821       1 main.go:299] handling current node
	I0915 07:40:52.669853       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0915 07:40:52.669871       1 main.go:322] Node multinode-127008-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0db0b1951a78896466f2770710416685f0959d2509e4844f54eed621bf61c914] <==
	I0915 07:36:50.164917       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:36:50.164943       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:36:50.164953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:36:50.167502       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:36:50.170311       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 07:36:50.170537       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:36:50.171237       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:36:50.177255       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:36:50.177366       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:36:50.177395       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:36:50.177422       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:36:50.177444       1 cache.go:39] Caches are synced for autoregister controller
	E0915 07:36:50.179523       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0915 07:36:50.194249       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 07:36:50.206993       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:36:50.207071       1 policy_source.go:224] refreshing policies
	I0915 07:36:50.212586       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:36:50.977068       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 07:36:52.309490       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 07:36:52.454733       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 07:36:52.478874       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 07:36:52.585063       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 07:36:52.593887       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 07:36:53.578734       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 07:36:53.723515       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [39a551c82457450de7c1c877a7fb4eb1c93fe650148b1fd23b591df91f2ebaa0] <==
	W0915 07:35:01.861523       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861564       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861602       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861638       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861877       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861953       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.861983       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862018       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862063       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862102       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.862138       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.864991       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865028       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865058       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865101       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865146       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865682       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865718       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865750       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865781       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865817       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.865923       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.866061       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.866097       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0915 07:35:01.866232       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [672943905b036b90e6fbbc428e9e8f99b446eca500d27bcb27118d86dc069e9e] <==
	I0915 07:32:35.573393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:35.573469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:32:36.617454       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-127008-m03\" does not exist"
	I0915 07:32:36.617877       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:32:36.636792       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-127008-m03" podCIDRs=["10.244.3.0/24"]
	I0915 07:32:36.636898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:36.636930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:36.640673       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:37.114534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:37.438467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:39.806798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:46.853770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:56.368855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:56.369334       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:32:56.380175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:32:59.763389       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:33:34.779797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:33:34.780864       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m03"
	I0915 07:33:34.802059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:33:34.807273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.496352ms"
	I0915 07:33:34.807738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.976µs"
	I0915 07:33:39.834412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:33:39.853976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:33:39.882412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:33:49.957372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	
	
	==> kube-controller-manager [e470f2131890f0fd11a999a522fa58b7343b5b1eaf5220eb071c462ca05ba3de] <==
	I0915 07:38:10.004451       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-127008-m03" podCIDRs=["10.244.2.0/24"]
	I0915 07:38:10.004501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.004805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.015556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.267633       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:10.599889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:13.777308       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:20.201381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:28.547100       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:28.547485       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:38:28.560483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:28.694427       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:33.280955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:33.300627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:33.775145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m03"
	I0915 07:38:33.775273       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127008-m02"
	I0915 07:39:13.717567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:39:13.741834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:39:13.752604       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.507399ms"
	I0915 07:39:13.753944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.709µs"
	I0915 07:39:18.796331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-127008-m02"
	I0915 07:39:33.605098       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d2r9v"
	I0915 07:39:33.633909       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d2r9v"
	I0915 07:39:33.634864       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lsd2q"
	I0915 07:39:33.656332       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lsd2q"
	
	
	==> kube-proxy [63e0b614cde44fb2193d901af66ec3799ef5a9d4f531ee045ac8fccaea9d7ce6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:30:06.730652       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:30:06.745648       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	E0915 07:30:06.745763       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:30:06.792645       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:30:06.792730       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:30:06.792767       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:30:06.795458       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:30:06.795759       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:30:06.795801       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:30:06.797662       1 config.go:199] "Starting service config controller"
	I0915 07:30:06.797719       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:30:06.797754       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:30:06.797769       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:30:06.798410       1 config.go:328] "Starting node config controller"
	I0915 07:30:06.798457       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:30:06.898351       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:30:06.898444       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:30:06.898548       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d8350dbed2d0e1e1449148904a84fa5b148acd2edc069a29cf5e44d0d79d3193] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:36:51.912337       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:36:51.935128       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	E0915 07:36:51.935274       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:36:52.004315       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:36:52.004372       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:36:52.004397       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:36:52.010730       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:36:52.011012       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:36:52.011027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:36:52.016772       1 config.go:199] "Starting service config controller"
	I0915 07:36:52.016823       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:36:52.016861       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:36:52.016865       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:36:52.017491       1 config.go:328] "Starting node config controller"
	I0915 07:36:52.017520       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:36:52.117523       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:36:52.117584       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:36:52.117834       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e199d57146177ff6ced3b5d5b3f612263313b1500a411a40e7342f53e3239d98] <==
	I0915 07:36:47.660148       1 serving.go:386] Generated self-signed cert in-memory
	W0915 07:36:50.006358       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 07:36:50.006449       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 07:36:50.006475       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 07:36:50.006487       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 07:36:50.121569       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 07:36:50.121668       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:36:50.135611       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 07:36:50.136133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 07:36:50.138253       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 07:36:50.138344       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 07:36:50.239267       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fd304bb04be08dee1ece6b9daec0e0a3b587079dfff4e56cfbaf506e8083f3fe] <==
	E0915 07:29:58.089983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.090045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 07:29:58.090083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.906256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:58.906289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.912246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:58.912329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:58.950085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 07:29:58.950217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.056844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 07:29:59.057167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.076573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 07:29:59.077335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.084278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 07:29:59.084327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.106687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 07:29:59.106745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.134484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:59.134534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.177165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 07:29:59.177253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:29:59.251824       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 07:29:59.251875       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 07:30:01.983239       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:35:01.842952       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 15 07:39:35 multinode-127008 kubelet[3052]: E0915 07:39:35.988824    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385975986849000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:39:45 multinode-127008 kubelet[3052]: E0915 07:39:45.944688    3052 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:39:45 multinode-127008 kubelet[3052]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:39:45 multinode-127008 kubelet[3052]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:39:45 multinode-127008 kubelet[3052]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:39:45 multinode-127008 kubelet[3052]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:39:45 multinode-127008 kubelet[3052]: E0915 07:39:45.993058    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385985992570742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:39:45 multinode-127008 kubelet[3052]: E0915 07:39:45.993109    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385985992570742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:39:55 multinode-127008 kubelet[3052]: E0915 07:39:55.994469    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385995994098995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:39:55 multinode-127008 kubelet[3052]: E0915 07:39:55.994513    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726385995994098995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:05 multinode-127008 kubelet[3052]: E0915 07:40:05.995772    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386005995488316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:05 multinode-127008 kubelet[3052]: E0915 07:40:05.995820    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386005995488316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:15 multinode-127008 kubelet[3052]: E0915 07:40:15.999056    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386015997989545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:15 multinode-127008 kubelet[3052]: E0915 07:40:15.999119    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386015997989545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:26 multinode-127008 kubelet[3052]: E0915 07:40:26.000828    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386026000426096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:26 multinode-127008 kubelet[3052]: E0915 07:40:26.000859    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386026000426096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:36 multinode-127008 kubelet[3052]: E0915 07:40:36.002764    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386036002359331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:36 multinode-127008 kubelet[3052]: E0915 07:40:36.002791    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386036002359331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:45 multinode-127008 kubelet[3052]: E0915 07:40:45.945905    3052 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 07:40:45 multinode-127008 kubelet[3052]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 07:40:45 multinode-127008 kubelet[3052]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 07:40:45 multinode-127008 kubelet[3052]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 07:40:45 multinode-127008 kubelet[3052]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 07:40:46 multinode-127008 kubelet[3052]: E0915 07:40:46.004542    3052 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386046004262465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:40:46 multinode-127008 kubelet[3052]: E0915 07:40:46.004594    3052 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386046004262465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:40:54.296204   47088 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19644-6166/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
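The kubelet excerpt above repeats two errors worth separating: the eviction manager keeps rejecting CRI-O's ImageFsInfo response as "missing image stats", and the iptables canary cannot create KUBE-KUBELET-CANARY because the ip6tables nat table is not loaded; the "failed to output last start logs" stderr note means lastStart.txt has grown past bufio.Scanner's default 64 KiB token limit, so minikube could not echo the last start log. A sketch of how each could be checked by hand once the node is running again (profile name and path taken from the output above):

	out/minikube-linux-amd64 -p multinode-127008 ssh -- sudo crictl imagefsinfo
	out/minikube-linux-amd64 -p multinode-127008 ssh -- sudo modprobe ip6table_nat
	out/minikube-linux-amd64 -p multinode-127008 ssh -- sudo ip6tables -t nat -L -n
	wc -c /home/jenkins/minikube-integration/19644-6166/.minikube/logs/lastStart.txt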
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-127008 -n multinode-127008
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-127008 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.20s)

                                                
                                    
TestPreload (277.94s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-514007 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0915 07:46:02.684289   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-514007 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.688225697s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-514007 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-514007 image pull gcr.io/k8s-minikube/busybox: (3.311632039s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-514007
E0915 07:47:56.198918   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-514007: exit status 82 (2m0.427764127s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-514007"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-514007 failed: exit status 82
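Exit status 82 here is the GUEST_STOP_TIMEOUT shown above: the kvm2 guest never left the "Running" state within the stop timeout. A sketch of how the advice box could be followed and the libvirt domain then inspected or force-powered-off (the kvm2 driver names the domain after the profile; virsh destroy is a hard power-off):

	out/minikube-linux-amd64 -p test-preload-514007 logs --file=logs.txt
	sudo virsh list --all
	sudo virsh destroy test-preload-514007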
panic.go:629: *** TestPreload FAILED at 2024-09-15 07:49:06.600484276 +0000 UTC m=+4765.548757863
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-514007 -n test-preload-514007
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-514007 -n test-preload-514007: exit status 3 (18.604957122s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:49:25.202115   49965 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	E0915 07:49:25.202131   49965 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-514007" host is not running, skipping log retrieval (state="Error")
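The status errors above mean the probe could not reach the guest's SSH port at 192.168.39.205:22 ("no route to host"). A sketch for confirming that from the Jenkins host (address and profile name taken from the output above):

	nc -vz -w 5 192.168.39.205 22
	sudo virsh domstate test-preload-514007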
helpers_test.go:175: Cleaning up "test-preload-514007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-514007
--- FAIL: TestPreload (277.94s)

                                                
                                    
TestKubernetesUpgrade (1224.2s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0915 07:52:56.196627   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m37.342000089s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-669362] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-669362" primary control-plane node in "kubernetes-upgrade-669362" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:52:54.014286   52215 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:52:54.014557   52215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:52:54.014568   52215 out.go:358] Setting ErrFile to fd 2...
	I0915 07:52:54.014572   52215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:52:54.014787   52215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:52:54.015381   52215 out.go:352] Setting JSON to false
	I0915 07:52:54.016357   52215 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5720,"bootTime":1726381054,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:52:54.016442   52215 start.go:139] virtualization: kvm guest
	I0915 07:52:54.018709   52215 out.go:177] * [kubernetes-upgrade-669362] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:52:54.019953   52215 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:52:54.019954   52215 notify.go:220] Checking for updates...
	I0915 07:52:54.021316   52215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:52:54.022883   52215 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:52:54.024358   52215 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:52:54.025686   52215 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:52:54.026983   52215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:52:54.028772   52215 config.go:182] Loaded profile config "cert-expiration-773617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:52:54.028894   52215 config.go:182] Loaded profile config "offline-crio-727172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:52:54.029002   52215 config.go:182] Loaded profile config "pause-742219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:52:54.029119   52215 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:52:54.064137   52215 out.go:177] * Using the kvm2 driver based on user configuration
	I0915 07:52:54.065489   52215 start.go:297] selected driver: kvm2
	I0915 07:52:54.065502   52215 start.go:901] validating driver "kvm2" against <nil>
	I0915 07:52:54.065513   52215 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:52:54.066257   52215 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:52:54.066320   52215 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:52:54.081268   52215 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:52:54.081309   52215 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 07:52:54.081543   52215 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 07:52:54.081567   52215 cni.go:84] Creating CNI manager for ""
	I0915 07:52:54.081605   52215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:52:54.081614   52215 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 07:52:54.081662   52215 start.go:340] cluster config:
	{Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:52:54.081747   52215 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:52:54.083460   52215 out.go:177] * Starting "kubernetes-upgrade-669362" primary control-plane node in "kubernetes-upgrade-669362" cluster
	I0915 07:52:54.084720   52215 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 07:52:54.084758   52215 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0915 07:52:54.084767   52215 cache.go:56] Caching tarball of preloaded images
	I0915 07:52:54.084839   52215 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:52:54.084850   52215 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0915 07:52:54.084919   52215 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/config.json ...
	I0915 07:52:54.084943   52215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/config.json: {Name:mk5351c7fe24d574e5bb4dc309f6acc96064d0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:52:54.085064   52215 start.go:360] acquireMachinesLock for kubernetes-upgrade-669362: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:52:59.442870   52215 start.go:364] duration metric: took 5.35775606s to acquireMachinesLock for "kubernetes-upgrade-669362"
	I0915 07:52:59.442922   52215 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:52:59.443042   52215 start.go:125] createHost starting for "" (driver="kvm2")
	I0915 07:52:59.445323   52215 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 07:52:59.445541   52215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:52:59.445585   52215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:52:59.461516   52215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0915 07:52:59.462032   52215 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:52:59.462675   52215 main.go:141] libmachine: Using API Version  1
	I0915 07:52:59.462700   52215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:52:59.463072   52215 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:52:59.463260   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetMachineName
	I0915 07:52:59.463415   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:52:59.463601   52215 start.go:159] libmachine.API.Create for "kubernetes-upgrade-669362" (driver="kvm2")
	I0915 07:52:59.463633   52215 client.go:168] LocalClient.Create starting
	I0915 07:52:59.463675   52215 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem
	I0915 07:52:59.463714   52215 main.go:141] libmachine: Decoding PEM data...
	I0915 07:52:59.463742   52215 main.go:141] libmachine: Parsing certificate...
	I0915 07:52:59.463808   52215 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem
	I0915 07:52:59.463830   52215 main.go:141] libmachine: Decoding PEM data...
	I0915 07:52:59.463842   52215 main.go:141] libmachine: Parsing certificate...
	I0915 07:52:59.463857   52215 main.go:141] libmachine: Running pre-create checks...
	I0915 07:52:59.463870   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .PreCreateCheck
	I0915 07:52:59.464191   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetConfigRaw
	I0915 07:52:59.464593   52215 main.go:141] libmachine: Creating machine...
	I0915 07:52:59.464613   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .Create
	I0915 07:52:59.464793   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Creating KVM machine...
	I0915 07:52:59.465950   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found existing default KVM network
	I0915 07:52:59.467023   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.466885   52281 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:45:7b:56} reservation:<nil>}
	I0915 07:52:59.467879   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.467808   52281 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:54:9e} reservation:<nil>}
	I0915 07:52:59.470025   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.469933   52281 network.go:209] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0915 07:52:59.471070   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.470990   52281 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:41:97} reservation:<nil>}
	I0915 07:52:59.472133   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.472052   52281 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f90}
	I0915 07:52:59.472168   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | created network xml: 
	I0915 07:52:59.472191   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | <network>
	I0915 07:52:59.472215   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |   <name>mk-kubernetes-upgrade-669362</name>
	I0915 07:52:59.472234   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |   <dns enable='no'/>
	I0915 07:52:59.472246   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |   
	I0915 07:52:59.472260   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0915 07:52:59.472272   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |     <dhcp>
	I0915 07:52:59.472279   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0915 07:52:59.472287   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |     </dhcp>
	I0915 07:52:59.472306   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |   </ip>
	I0915 07:52:59.472314   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG |   
	I0915 07:52:59.472325   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | </network>
	I0915 07:52:59.472334   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | 
	I0915 07:52:59.478434   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | trying to create private KVM network mk-kubernetes-upgrade-669362 192.168.83.0/24...
	I0915 07:52:59.550849   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | private KVM network mk-kubernetes-upgrade-669362 192.168.83.0/24 created
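	# Sketch (assumes virsh is available on the Jenkins host): the private network
	# libmachine created from the XML above can be inspected with libvirt directly.
	sudo virsh net-list --all
	sudo virsh net-dumpxml mk-kubernetes-upgrade-669362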
	I0915 07:52:59.550881   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Setting up store path in /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362 ...
	I0915 07:52:59.550895   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.550839   52281 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:52:59.550911   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Building disk image from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 07:52:59.550985   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Downloading /home/jenkins/minikube-integration/19644-6166/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso...
	I0915 07:52:59.794531   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.794407   52281 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa...
	I0915 07:52:59.868479   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.868337   52281 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/kubernetes-upgrade-669362.rawdisk...
	I0915 07:52:59.868507   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Writing magic tar header
	I0915 07:52:59.868524   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Writing SSH key tar header
	I0915 07:52:59.868606   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:52:59.868525   52281 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362 ...
	I0915 07:52:59.868670   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362
	I0915 07:52:59.868693   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362 (perms=drwx------)
	I0915 07:52:59.868705   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube/machines
	I0915 07:52:59.868720   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:52:59.868733   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19644-6166
	I0915 07:52:59.868744   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0915 07:52:59.868752   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Checking permissions on dir: /home/jenkins
	I0915 07:52:59.868762   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube/machines (perms=drwxr-xr-x)
	I0915 07:52:59.868775   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Checking permissions on dir: /home
	I0915 07:52:59.868786   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166/.minikube (perms=drwxr-xr-x)
	I0915 07:52:59.868801   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Setting executable bit set on /home/jenkins/minikube-integration/19644-6166 (perms=drwxrwxr-x)
	I0915 07:52:59.868813   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0915 07:52:59.868823   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0915 07:52:59.868834   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Creating domain...
	I0915 07:52:59.868852   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Skipping /home - not owner
	I0915 07:52:59.870253   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) define libvirt domain using xml: 
	I0915 07:52:59.870288   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) <domain type='kvm'>
	I0915 07:52:59.870308   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   <name>kubernetes-upgrade-669362</name>
	I0915 07:52:59.870319   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   <memory unit='MiB'>2200</memory>
	I0915 07:52:59.870346   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   <vcpu>2</vcpu>
	I0915 07:52:59.870368   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   <features>
	I0915 07:52:59.870380   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <acpi/>
	I0915 07:52:59.870387   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <apic/>
	I0915 07:52:59.870399   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <pae/>
	I0915 07:52:59.870408   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     
	I0915 07:52:59.870425   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   </features>
	I0915 07:52:59.870435   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   <cpu mode='host-passthrough'>
	I0915 07:52:59.870443   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   
	I0915 07:52:59.870448   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   </cpu>
	I0915 07:52:59.870456   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   <os>
	I0915 07:52:59.870465   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <type>hvm</type>
	I0915 07:52:59.870478   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <boot dev='cdrom'/>
	I0915 07:52:59.870488   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <boot dev='hd'/>
	I0915 07:52:59.870500   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <bootmenu enable='no'/>
	I0915 07:52:59.870509   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   </os>
	I0915 07:52:59.870533   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   <devices>
	I0915 07:52:59.870543   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <disk type='file' device='cdrom'>
	I0915 07:52:59.870561   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/boot2docker.iso'/>
	I0915 07:52:59.870575   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <target dev='hdc' bus='scsi'/>
	I0915 07:52:59.870583   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <readonly/>
	I0915 07:52:59.870589   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     </disk>
	I0915 07:52:59.870597   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <disk type='file' device='disk'>
	I0915 07:52:59.870607   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0915 07:52:59.870627   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <source file='/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/kubernetes-upgrade-669362.rawdisk'/>
	I0915 07:52:59.870638   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <target dev='hda' bus='virtio'/>
	I0915 07:52:59.870645   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     </disk>
	I0915 07:52:59.870655   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <interface type='network'>
	I0915 07:52:59.870664   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <source network='mk-kubernetes-upgrade-669362'/>
	I0915 07:52:59.870672   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <model type='virtio'/>
	I0915 07:52:59.870680   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     </interface>
	I0915 07:52:59.870689   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <interface type='network'>
	I0915 07:52:59.870698   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <source network='default'/>
	I0915 07:52:59.870707   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <model type='virtio'/>
	I0915 07:52:59.870715   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     </interface>
	I0915 07:52:59.870725   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <serial type='pty'>
	I0915 07:52:59.870734   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <target port='0'/>
	I0915 07:52:59.870743   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     </serial>
	I0915 07:52:59.870751   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <console type='pty'>
	I0915 07:52:59.870762   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <target type='serial' port='0'/>
	I0915 07:52:59.870773   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     </console>
	I0915 07:52:59.870782   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     <rng model='virtio'>
	I0915 07:52:59.870789   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)       <backend model='random'>/dev/random</backend>
	I0915 07:52:59.870795   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     </rng>
	I0915 07:52:59.870801   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     
	I0915 07:52:59.870807   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)     
	I0915 07:52:59.870812   52215 main.go:141] libmachine: (kubernetes-upgrade-669362)   </devices>
	I0915 07:52:59.870816   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) </domain>
	I0915 07:52:59.870823   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) 
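	# Sketch (assumes virsh is available on the Jenkins host): confirm the domain
	# definition printed above was stored by libvirt and list its two interfaces.
	sudo virsh dumpxml kubernetes-upgrade-669362
	sudo virsh domiflist kubernetes-upgrade-669362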
	I0915 07:52:59.876139   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:96:59:24 in network default
	I0915 07:52:59.876789   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Ensuring networks are active...
	I0915 07:52:59.876815   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:52:59.877547   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Ensuring network default is active
	I0915 07:52:59.878006   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Ensuring network mk-kubernetes-upgrade-669362 is active
	I0915 07:52:59.878766   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Getting domain xml...
	I0915 07:52:59.879688   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Creating domain...
	I0915 07:53:01.231538   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Waiting to get IP...
	I0915 07:53:01.233114   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:01.234242   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:01.234270   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:01.234242   52281 retry.go:31] will retry after 209.349968ms: waiting for machine to come up
	I0915 07:53:01.445846   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:01.446433   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:01.446457   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:01.446370   52281 retry.go:31] will retry after 327.245399ms: waiting for machine to come up
	I0915 07:53:01.775739   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:01.776370   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:01.776395   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:01.776288   52281 retry.go:31] will retry after 424.575052ms: waiting for machine to come up
	I0915 07:53:02.202625   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:02.203155   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:02.203184   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:02.203110   52281 retry.go:31] will retry after 528.769392ms: waiting for machine to come up
	I0915 07:53:02.733685   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:02.734255   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:02.734284   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:02.734195   52281 retry.go:31] will retry after 680.568655ms: waiting for machine to come up
	I0915 07:53:03.416380   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:03.417034   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:03.417071   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:03.416951   52281 retry.go:31] will retry after 731.615088ms: waiting for machine to come up
	I0915 07:53:04.150936   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:04.151445   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:04.151472   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:04.151401   52281 retry.go:31] will retry after 1.122713527s: waiting for machine to come up
	I0915 07:53:05.275843   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:05.276387   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:05.276412   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:05.276348   52281 retry.go:31] will retry after 1.331009947s: waiting for machine to come up
	I0915 07:53:06.609043   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:06.609441   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:06.609507   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:06.609428   52281 retry.go:31] will retry after 1.640160704s: waiting for machine to come up
	I0915 07:53:08.250842   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:08.251339   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:08.251366   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:08.251288   52281 retry.go:31] will retry after 1.653971599s: waiting for machine to come up
	I0915 07:53:10.347560   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:10.348117   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:10.348152   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:10.348082   52281 retry.go:31] will retry after 2.421310624s: waiting for machine to come up
	I0915 07:53:12.771006   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:12.771550   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:12.771574   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:12.771488   52281 retry.go:31] will retry after 3.308795066s: waiting for machine to come up
	I0915 07:53:16.081648   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:16.082180   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:16.082209   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:16.082119   52281 retry.go:31] will retry after 2.776414605s: waiting for machine to come up
	I0915 07:53:18.860685   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:18.861377   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find current IP address of domain kubernetes-upgrade-669362 in network mk-kubernetes-upgrade-669362
	I0915 07:53:18.861418   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | I0915 07:53:18.861321   52281 retry.go:31] will retry after 5.558446401s: waiting for machine to come up
	I0915 07:53:24.422895   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.423495   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Found IP for machine: 192.168.83.150
	I0915 07:53:24.423520   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Reserving static IP address...
	I0915 07:53:24.423548   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has current primary IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.423818   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-669362", mac: "52:54:00:62:f3:5f", ip: "192.168.83.150"} in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.501030   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Getting to WaitForSSH function...
	I0915 07:53:24.501068   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Reserved static IP address: 192.168.83.150
	I0915 07:53:24.501083   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Waiting for SSH to be available...
	I0915 07:53:24.503934   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.504366   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:24.504403   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.504511   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Using SSH client type: external
	I0915 07:53:24.504539   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa (-rw-------)
	I0915 07:53:24.504595   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:53:24.504616   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | About to run SSH command:
	I0915 07:53:24.504632   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | exit 0
	I0915 07:53:24.634229   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | SSH cmd err, output: <nil>: 
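	# Sketch: the WaitForSSH probe above can be replayed by hand with the key and
	# address it printed, which is useful when the wait loop times out instead.
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa \
	    docker@192.168.83.150 'exit 0'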
	I0915 07:53:24.634516   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) KVM machine creation complete!
	I0915 07:53:24.634794   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetConfigRaw
	I0915 07:53:24.635312   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:53:24.635498   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:53:24.635634   52215 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0915 07:53:24.635651   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetState
	I0915 07:53:24.637007   52215 main.go:141] libmachine: Detecting operating system of created instance...
	I0915 07:53:24.637023   52215 main.go:141] libmachine: Waiting for SSH to be available...
	I0915 07:53:24.637030   52215 main.go:141] libmachine: Getting to WaitForSSH function...
	I0915 07:53:24.637036   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:24.639523   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.639927   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:24.639954   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.640049   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:24.640222   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:24.640387   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:24.640572   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:24.640723   52215 main.go:141] libmachine: Using SSH client type: native
	I0915 07:53:24.640909   52215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:53:24.640920   52215 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0915 07:53:24.753457   52215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:53:24.753487   52215 main.go:141] libmachine: Detecting the provisioner...
	I0915 07:53:24.753499   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:24.756238   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.756569   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:24.756609   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.756732   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:24.756928   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:24.757095   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:24.757237   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:24.757361   52215 main.go:141] libmachine: Using SSH client type: native
	I0915 07:53:24.757547   52215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:53:24.757560   52215 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0915 07:53:24.870655   52215 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0915 07:53:24.870724   52215 main.go:141] libmachine: found compatible host: buildroot
	I0915 07:53:24.870734   52215 main.go:141] libmachine: Provisioning with buildroot...
	I0915 07:53:24.870747   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetMachineName
	I0915 07:53:24.870995   52215 buildroot.go:166] provisioning hostname "kubernetes-upgrade-669362"
	I0915 07:53:24.871027   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetMachineName
	I0915 07:53:24.871213   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:24.874054   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.874388   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:24.874429   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:24.874552   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:24.874728   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:24.874852   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:24.874974   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:24.875125   52215 main.go:141] libmachine: Using SSH client type: native
	I0915 07:53:24.875293   52215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:53:24.875304   52215 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-669362 && echo "kubernetes-upgrade-669362" | sudo tee /etc/hostname
	I0915 07:53:25.011068   52215 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-669362
	
	I0915 07:53:25.011104   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.014511   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.014874   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.014904   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.015133   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:25.015308   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.015495   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.015636   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:25.015798   52215 main.go:141] libmachine: Using SSH client type: native
	I0915 07:53:25.016014   52215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:53:25.016042   52215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-669362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-669362/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-669362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:53:25.139121   52215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:53:25.139148   52215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:53:25.139167   52215 buildroot.go:174] setting up certificates
	I0915 07:53:25.139176   52215 provision.go:84] configureAuth start
	I0915 07:53:25.139187   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetMachineName
	I0915 07:53:25.139462   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetIP
	I0915 07:53:25.142071   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.142439   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.142467   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.142581   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.144768   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.145084   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.145107   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.145249   52215 provision.go:143] copyHostCerts
	I0915 07:53:25.145316   52215 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:53:25.145354   52215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:53:25.145422   52215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:53:25.145530   52215 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:53:25.145542   52215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:53:25.145586   52215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:53:25.145654   52215 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:53:25.145664   52215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:53:25.145691   52215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:53:25.145759   52215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-669362 san=[127.0.0.1 192.168.83.150 kubernetes-upgrade-669362 localhost minikube]
	I0915 07:53:25.207582   52215 provision.go:177] copyRemoteCerts
	I0915 07:53:25.207641   52215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:53:25.207665   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.210665   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.211006   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.211035   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.211242   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:25.211430   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.211600   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:25.211750   52215 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:53:25.300245   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:53:25.328162   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0915 07:53:25.354728   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:53:25.381517   52215 provision.go:87] duration metric: took 242.329531ms to configureAuth
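For reference, the configureAuth phase above issues a server certificate that must cover every name and address in the logged SAN list (127.0.0.1, 192.168.83.150, kubernetes-upgrade-669362, localhost, minikube). The following is only a minimal standalone Go sketch of producing such a certificate with the standard library; it is self-signed for brevity, whereas minikube signs it with the ca.pem/ca-key.pem shown in the log, and the 3-year lifetime is taken from the CertExpiration value in the config dump below.

	// Minimal sketch (not minikube's provision code): a self-signed TLS server
	// certificate covering the SANs logged above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-669362"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: hostnames and IPs the server must answer for.
			DNSNames:    []string{"kubernetes-upgrade-669362", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.150")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}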
	I0915 07:53:25.381545   52215 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:53:25.381723   52215 config.go:182] Loaded profile config "kubernetes-upgrade-669362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0915 07:53:25.381819   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.384595   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.384879   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.384899   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.385093   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:25.385283   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.385460   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.385596   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:25.385771   52215 main.go:141] libmachine: Using SSH client type: native
	I0915 07:53:25.385962   52215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:53:25.385980   52215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:53:25.618793   52215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:53:25.618835   52215 main.go:141] libmachine: Checking connection to Docker...
	I0915 07:53:25.618846   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetURL
	I0915 07:53:25.620355   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | Using libvirt version 6000000
	I0915 07:53:25.623236   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.623689   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.623722   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.624011   52215 main.go:141] libmachine: Docker is up and running!
	I0915 07:53:25.624036   52215 main.go:141] libmachine: Reticulating splines...
	I0915 07:53:25.624045   52215 client.go:171] duration metric: took 26.160401933s to LocalClient.Create
	I0915 07:53:25.624068   52215 start.go:167] duration metric: took 26.160473162s to libmachine.API.Create "kubernetes-upgrade-669362"
	I0915 07:53:25.624080   52215 start.go:293] postStartSetup for "kubernetes-upgrade-669362" (driver="kvm2")
	I0915 07:53:25.624093   52215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:53:25.624116   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:53:25.624376   52215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:53:25.624421   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.627269   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.627684   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.627708   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.627869   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:25.628038   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.628201   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:25.628357   52215 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:53:25.718712   52215 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:53:25.723253   52215 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:53:25.723277   52215 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:53:25.723335   52215 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:53:25.723407   52215 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:53:25.723494   52215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:53:25.733550   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:53:25.757403   52215 start.go:296] duration metric: took 133.308756ms for postStartSetup
	I0915 07:53:25.757446   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetConfigRaw
	I0915 07:53:25.758060   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetIP
	I0915 07:53:25.760754   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.761087   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.761131   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.761322   52215 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/config.json ...
	I0915 07:53:25.761501   52215 start.go:128] duration metric: took 26.318448773s to createHost
	I0915 07:53:25.761521   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.763507   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.763816   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.763842   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.763942   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:25.764081   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.764215   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.764323   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:25.764429   52215 main.go:141] libmachine: Using SSH client type: native
	I0915 07:53:25.764582   52215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:53:25.764591   52215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:53:25.878828   52215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726386805.855927416
	
	I0915 07:53:25.878850   52215 fix.go:216] guest clock: 1726386805.855927416
	I0915 07:53:25.878860   52215 fix.go:229] Guest: 2024-09-15 07:53:25.855927416 +0000 UTC Remote: 2024-09-15 07:53:25.761512285 +0000 UTC m=+31.781687951 (delta=94.415131ms)
	I0915 07:53:25.878910   52215 fix.go:200] guest clock delta is within tolerance: 94.415131ms
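The clock check above runs "date +%s.%N" on the guest, parses the result, and compares it against the host's own timestamp. A minimal Go sketch of that comparison, using the exact values from this log; the one-second tolerance is an assumption for illustration, not the threshold minikube's fix.go actually uses:

	// Minimal sketch (not minikube's fix.go): parse the guest's `date +%s.%N`
	// output and check the skew against the host clock, as logged above.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns "1726386805.855927416" into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		const tolerance = time.Second // assumed threshold for this sketch

		guest, err := parseEpoch("1726386805.855927416") // guest value from the log
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 9, 15, 7, 53, 25, 761512285, time.UTC) // "Remote" time from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}

Running this prints a delta of 94.415131ms, matching the value reported by fix.go above.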
	I0915 07:53:25.878915   52215 start.go:83] releasing machines lock for "kubernetes-upgrade-669362", held for 26.436020143s
	I0915 07:53:25.878939   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:53:25.879194   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetIP
	I0915 07:53:25.882850   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.883217   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.883264   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.883421   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:53:25.883919   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:53:25.884094   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:53:25.884208   52215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:53:25.884256   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.884308   52215 ssh_runner.go:195] Run: cat /version.json
	I0915 07:53:25.884335   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:53:25.887115   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.887441   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.887501   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.887520   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.887616   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:25.887780   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.887911   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:25.887930   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:25.887942   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:25.888105   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:53:25.888108   52215 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:53:25.888268   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:53:25.888411   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:53:25.888559   52215 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:53:25.974878   52215 ssh_runner.go:195] Run: systemctl --version
	I0915 07:53:25.996902   52215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:53:26.162485   52215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:53:26.168282   52215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:53:26.168348   52215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:53:26.186086   52215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 07:53:26.186178   52215 start.go:495] detecting cgroup driver to use...
	I0915 07:53:26.186267   52215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:53:26.204071   52215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:53:26.223049   52215 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:53:26.223173   52215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:53:26.238327   52215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:53:26.253521   52215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:53:26.370294   52215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:53:26.526286   52215 docker.go:233] disabling docker service ...
	I0915 07:53:26.526356   52215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:53:26.541590   52215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:53:26.554389   52215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:53:26.704822   52215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:53:26.852956   52215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:53:26.869707   52215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:53:26.888488   52215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0915 07:53:26.888549   52215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:53:26.898318   52215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:53:26.898379   52215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:53:26.908724   52215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:53:26.919071   52215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:53:26.929406   52215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:53:26.940919   52215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:53:26.951610   52215 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 07:53:26.951674   52215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 07:53:26.965457   52215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:53:26.974810   52215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:53:27.114236   52215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:53:27.223040   52215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:53:27.223108   52215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:53:27.228497   52215 start.go:563] Will wait 60s for crictl version
	I0915 07:53:27.228544   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:27.232515   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:53:27.277905   52215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:53:27.277989   52215 ssh_runner.go:195] Run: crio --version
	I0915 07:53:27.309187   52215 ssh_runner.go:195] Run: crio --version
	I0915 07:53:27.341337   52215 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0915 07:53:27.342856   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetIP
	I0915 07:53:27.345615   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:27.345987   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:53:15 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:53:27.346017   52215 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:53:27.346177   52215 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0915 07:53:27.350260   52215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
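The one-liner above rewrites /etc/hosts by filtering out any existing host.minikube.internal line, appending the new mapping, and copying a temp file back into place. A rough Go equivalent of that upsert pattern, for local illustration only; the hosts.txt path and plain-file write are assumptions, and minikube itself does this over SSH with the shell pipeline shown above:

	// Minimal sketch of the hosts-update pattern in the log: drop any existing
	// line for the name, append the new mapping, and replace the file in one step.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// mirror `grep -v $'\t<name>$'`: drop lines already mapping this name
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // replace in one step, like the `cp /tmp/h.$$ /etc/hosts` above
	}

	func main() {
		if err := upsertHostsEntry("hosts.txt", "192.168.83.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}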
	I0915 07:53:27.363644   52215 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.150 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:53:27.363768   52215 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 07:53:27.363813   52215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:53:27.402582   52215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0915 07:53:27.402660   52215 ssh_runner.go:195] Run: which lz4
	I0915 07:53:27.406818   52215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 07:53:27.410956   52215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 07:53:27.410983   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0915 07:53:29.040930   52215 crio.go:462] duration metric: took 1.634137483s to copy over tarball
	I0915 07:53:29.041015   52215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 07:53:31.724855   52215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.68380852s)
	I0915 07:53:31.724884   52215 crio.go:469] duration metric: took 2.683924109s to extract the tarball
	I0915 07:53:31.724893   52215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 07:53:31.773988   52215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:53:31.824364   52215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0915 07:53:31.824388   52215 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0915 07:53:31.824462   52215 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0915 07:53:31.824492   52215 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0915 07:53:31.824517   52215 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0915 07:53:31.824524   52215 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 07:53:31.824542   52215 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0915 07:53:31.824469   52215 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 07:53:31.824561   52215 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0915 07:53:31.824493   52215 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0915 07:53:31.825862   52215 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0915 07:53:31.825983   52215 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0915 07:53:31.826014   52215 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0915 07:53:31.826046   52215 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0915 07:53:31.826014   52215 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 07:53:31.826162   52215 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 07:53:31.826359   52215 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0915 07:53:31.826441   52215 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0915 07:53:32.044515   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0915 07:53:32.057215   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0915 07:53:32.066274   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0915 07:53:32.096375   52215 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0915 07:53:32.096483   52215 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0915 07:53:32.096537   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:32.099481   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0915 07:53:32.123264   52215 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0915 07:53:32.123298   52215 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0915 07:53:32.123334   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:32.132348   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0915 07:53:32.153917   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0915 07:53:32.161980   52215 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0915 07:53:32.162028   52215 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0915 07:53:32.162052   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0915 07:53:32.162070   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:32.180330   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 07:53:32.187916   52215 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0915 07:53:32.187964   52215 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0915 07:53:32.187978   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0915 07:53:32.188041   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:32.266028   52215 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0915 07:53:32.266071   52215 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0915 07:53:32.266120   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:32.271704   52215 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0915 07:53:32.271746   52215 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0915 07:53:32.271782   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:32.271786   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0915 07:53:32.271803   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0915 07:53:32.315495   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0915 07:53:32.315558   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0915 07:53:32.315582   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0915 07:53:32.315899   52215 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0915 07:53:32.315923   52215 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 07:53:32.315953   52215 ssh_runner.go:195] Run: which crictl
	I0915 07:53:32.355818   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0915 07:53:32.355947   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0915 07:53:32.366052   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0915 07:53:32.451201   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0915 07:53:32.451271   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0915 07:53:32.458141   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 07:53:32.458145   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0915 07:53:32.487443   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0915 07:53:32.492284   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0915 07:53:32.509002   52215 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0915 07:53:32.630314   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0915 07:53:32.630418   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0915 07:53:32.639775   52215 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0915 07:53:32.639824   52215 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0915 07:53:32.639866   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 07:53:32.646036   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0915 07:53:32.740331   52215 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0915 07:53:32.743792   52215 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 07:53:32.743817   52215 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0915 07:53:32.743843   52215 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0915 07:53:32.783373   52215 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0915 07:53:33.086013   52215 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 07:53:33.229245   52215 cache_images.go:92] duration metric: took 1.404839703s to LoadCachedImages
	W0915 07:53:33.229341   52215 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0915 07:53:33.229356   52215 kubeadm.go:934] updating node { 192.168.83.150 8443 v1.20.0 crio true true} ...
	I0915 07:53:33.229465   52215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-669362 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:53:33.229559   52215 ssh_runner.go:195] Run: crio config
	I0915 07:53:33.286254   52215 cni.go:84] Creating CNI manager for ""
	I0915 07:53:33.286276   52215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:53:33.286286   52215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:53:33.286303   52215 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.150 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-669362 NodeName:kubernetes-upgrade-669362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0915 07:53:33.286441   52215 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-669362"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 07:53:33.286498   52215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0915 07:53:33.298236   52215 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:53:33.298310   52215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 07:53:33.308867   52215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0915 07:53:33.328266   52215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:53:33.346879   52215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0915 07:53:33.367573   52215 ssh_runner.go:195] Run: grep 192.168.83.150	control-plane.minikube.internal$ /etc/hosts
	I0915 07:53:33.371673   52215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:53:33.386109   52215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:53:33.530122   52215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:53:33.549277   52215 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362 for IP: 192.168.83.150
	I0915 07:53:33.549306   52215 certs.go:194] generating shared ca certs ...
	I0915 07:53:33.549327   52215 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:53:33.549513   52215 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:53:33.549575   52215 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:53:33.549590   52215 certs.go:256] generating profile certs ...
	I0915 07:53:33.549665   52215 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/client.key
	I0915 07:53:33.549683   52215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/client.crt with IP's: []
	I0915 07:53:33.689621   52215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/client.crt ...
	I0915 07:53:33.689653   52215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/client.crt: {Name:mkb11ccf15d2bb553e29f4c0c795d568dc9a8d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:53:33.689832   52215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/client.key ...
	I0915 07:53:33.689849   52215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/client.key: {Name:mk86513ffe1d058b7e4d9a309aa7f8a8133d0dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:53:33.689948   52215 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key.540c6cb2
	I0915 07:53:33.689972   52215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.crt.540c6cb2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.150]
	I0915 07:53:34.097330   52215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.crt.540c6cb2 ...
	I0915 07:53:34.097360   52215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.crt.540c6cb2: {Name:mk6b409675639b8c776b3fda245cd574ea033130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:53:34.097532   52215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key.540c6cb2 ...
	I0915 07:53:34.097547   52215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key.540c6cb2: {Name:mk996c588102c3897782a89f4e46e07af2cb5024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:53:34.097642   52215 certs.go:381] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.crt.540c6cb2 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.crt
	I0915 07:53:34.097736   52215 certs.go:385] copying /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key.540c6cb2 -> /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key
	I0915 07:53:34.097792   52215 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.key
	I0915 07:53:34.097862   52215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.crt with IP's: []
	I0915 07:53:34.324779   52215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.crt ...
	I0915 07:53:34.324822   52215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.crt: {Name:mk42abbdd3bd6d1e4d78a7896b33295885e537de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:53:34.325032   52215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.key ...
	I0915 07:53:34.325061   52215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.key: {Name:mkbc006545f1f8b30c9700e2757ef0f682ac1250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:53:34.325352   52215 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:53:34.325409   52215 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:53:34.325423   52215 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:53:34.325456   52215 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:53:34.325503   52215 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:53:34.325538   52215 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:53:34.325601   52215 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:53:34.326599   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:53:34.356960   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:53:34.381959   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:53:34.421057   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:53:34.449308   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0915 07:53:34.474149   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 07:53:34.501701   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:53:34.530827   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 07:53:34.554775   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:53:34.580219   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:53:34.611551   52215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:53:34.636831   52215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:53:34.655494   52215 ssh_runner.go:195] Run: openssl version
	I0915 07:53:34.661535   52215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:53:34.675072   52215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:53:34.680375   52215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:53:34.680455   52215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:53:34.686501   52215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:53:34.698551   52215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:53:34.710044   52215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:53:34.715001   52215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:53:34.715060   52215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:53:34.721298   52215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:53:34.734026   52215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:53:34.747757   52215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:53:34.752997   52215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:53:34.753060   52215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:53:34.759660   52215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
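For reference, the openssl x509 -hash / ln -fs pairs above are how minikube exposes each copied CA file to the system trust store: OpenSSL locates a CA by a <subject-hash>.0 symlink in /etc/ssl/certs. A minimal local sketch of that idea in Go, with hypothetical paths and run directly rather than through the SSH runner the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the pattern in the log above: compute the OpenSSL
// subject hash of a CA certificate and symlink it into the certs directory
// as "<hash>.0" so the system trust store can find it.
// The paths used in main are illustrative, not taken from the log.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, matching the "ln -fs" behaviour in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}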
	I0915 07:53:34.773648   52215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:53:34.778460   52215 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:53:34.778525   52215 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.150 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:53:34.778625   52215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:53:34.778702   52215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:53:34.817079   52215 cri.go:89] found id: ""
	I0915 07:53:34.817227   52215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 07:53:34.828178   52215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 07:53:34.838917   52215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 07:53:34.850547   52215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 07:53:34.850574   52215 kubeadm.go:157] found existing configuration files:
	
	I0915 07:53:34.850625   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 07:53:34.862218   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 07:53:34.862311   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 07:53:34.872661   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 07:53:34.883847   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 07:53:34.883916   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 07:53:34.895192   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 07:53:34.906349   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 07:53:34.906418   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 07:53:34.916726   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 07:53:34.927124   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 07:53:34.927196   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 07:53:34.938602   52215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 07:53:35.062267   52215 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0915 07:53:35.062322   52215 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 07:53:35.224116   52215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 07:53:35.224421   52215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 07:53:35.224682   52215 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0915 07:53:35.453618   52215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 07:53:35.586546   52215 out.go:235]   - Generating certificates and keys ...
	I0915 07:53:35.586669   52215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 07:53:35.586760   52215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 07:53:35.644361   52215 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 07:53:35.781926   52215 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 07:53:35.858484   52215 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 07:53:35.967428   52215 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 07:53:36.076754   52215 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 07:53:36.077040   52215 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-669362 localhost] and IPs [192.168.83.150 127.0.0.1 ::1]
	I0915 07:53:36.296459   52215 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 07:53:36.296663   52215 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-669362 localhost] and IPs [192.168.83.150 127.0.0.1 ::1]
	I0915 07:53:36.456105   52215 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 07:53:36.553781   52215 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 07:53:36.790153   52215 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 07:53:36.790316   52215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 07:53:36.878559   52215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 07:53:36.997520   52215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 07:53:37.051617   52215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 07:53:37.508398   52215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 07:53:37.527478   52215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 07:53:37.528538   52215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 07:53:37.528619   52215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 07:53:37.679957   52215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 07:53:37.683071   52215 out.go:235]   - Booting up control plane ...
	I0915 07:53:37.683212   52215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 07:53:37.688594   52215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 07:53:37.689582   52215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 07:53:37.699075   52215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 07:53:37.704380   52215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0915 07:54:17.698746   52215 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0915 07:54:17.699330   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:54:17.699609   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:54:22.699845   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:54:22.700106   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:54:32.699466   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:54:32.699683   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:54:52.699257   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:54:52.699476   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:55:32.700534   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:55:32.700817   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:55:32.700841   52215 kubeadm.go:310] 
	I0915 07:55:32.700891   52215 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0915 07:55:32.700944   52215 kubeadm.go:310] 		timed out waiting for the condition
	I0915 07:55:32.700955   52215 kubeadm.go:310] 
	I0915 07:55:32.700994   52215 kubeadm.go:310] 	This error is likely caused by:
	I0915 07:55:32.701033   52215 kubeadm.go:310] 		- The kubelet is not running
	I0915 07:55:32.701161   52215 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0915 07:55:32.701174   52215 kubeadm.go:310] 
	I0915 07:55:32.701315   52215 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0915 07:55:32.701381   52215 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0915 07:55:32.701437   52215 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0915 07:55:32.701444   52215 kubeadm.go:310] 
	I0915 07:55:32.701579   52215 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0915 07:55:32.701697   52215 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0915 07:55:32.701708   52215 kubeadm.go:310] 
	I0915 07:55:32.701848   52215 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0915 07:55:32.701941   52215 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0915 07:55:32.702000   52215 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0915 07:55:32.702054   52215 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0915 07:55:32.702058   52215 kubeadm.go:310] 
	I0915 07:55:32.704201   52215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 07:55:32.704314   52215 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0915 07:55:32.704400   52215 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
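The repeated [kubelet-check] lines above are kubeadm polling the kubelet health endpoint (the same check as curl -sSL http://localhost:10248/healthz) until it answers or the 4m0s wait-control-plane window expires. A rough Go sketch of that polling loop, with the URL and timings chosen for illustration rather than taken from kubeadm:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint the way the
// [kubelet-check] lines above describe, returning once it answers 200 OK
// or the timeout expires. Interval and timeout are assumptions.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForKubelet("http://localhost:10248/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}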
	W0915 07:55:32.704529   52215 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-669362 localhost] and IPs [192.168.83.150 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-669362 localhost] and IPs [192.168.83.150 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0915 07:55:32.704578   52215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0915 07:55:34.047172   52215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.342560791s)
	I0915 07:55:34.047273   52215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:55:34.067459   52215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 07:55:34.078462   52215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 07:55:34.078489   52215 kubeadm.go:157] found existing configuration files:
	
	I0915 07:55:34.078543   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 07:55:34.089229   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 07:55:34.089352   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 07:55:34.099933   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 07:55:34.110943   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 07:55:34.111016   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 07:55:34.122037   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 07:55:34.132637   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 07:55:34.132707   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 07:55:34.144705   52215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 07:55:34.156613   52215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 07:55:34.156690   52215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 07:55:34.167997   52215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 07:55:34.414920   52215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 07:57:30.660109   52215 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0915 07:57:30.660221   52215 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0915 07:57:30.662016   52215 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0915 07:57:30.662104   52215 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 07:57:30.662197   52215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 07:57:30.662290   52215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 07:57:30.662431   52215 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0915 07:57:30.662540   52215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 07:57:30.664455   52215 out.go:235]   - Generating certificates and keys ...
	I0915 07:57:30.664555   52215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 07:57:30.664635   52215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 07:57:30.664725   52215 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 07:57:30.664777   52215 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 07:57:30.664849   52215 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 07:57:30.664905   52215 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 07:57:30.664963   52215 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 07:57:30.665046   52215 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 07:57:30.665146   52215 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 07:57:30.665236   52215 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 07:57:30.665268   52215 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 07:57:30.665341   52215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 07:57:30.665421   52215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 07:57:30.665509   52215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 07:57:30.665623   52215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 07:57:30.665710   52215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 07:57:30.665861   52215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 07:57:30.665976   52215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 07:57:30.666030   52215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 07:57:30.666121   52215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 07:57:30.667698   52215 out.go:235]   - Booting up control plane ...
	I0915 07:57:30.667806   52215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 07:57:30.667890   52215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 07:57:30.667977   52215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 07:57:30.668095   52215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 07:57:30.668283   52215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0915 07:57:30.668355   52215 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0915 07:57:30.668478   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:57:30.668728   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:57:30.668811   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:57:30.668994   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:57:30.669075   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:57:30.669306   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:57:30.669381   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:57:30.669624   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:57:30.669725   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:57:30.669921   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:57:30.669938   52215 kubeadm.go:310] 
	I0915 07:57:30.669979   52215 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0915 07:57:30.670036   52215 kubeadm.go:310] 		timed out waiting for the condition
	I0915 07:57:30.670042   52215 kubeadm.go:310] 
	I0915 07:57:30.670091   52215 kubeadm.go:310] 	This error is likely caused by:
	I0915 07:57:30.670138   52215 kubeadm.go:310] 		- The kubelet is not running
	I0915 07:57:30.670270   52215 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0915 07:57:30.670278   52215 kubeadm.go:310] 
	I0915 07:57:30.670404   52215 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0915 07:57:30.670460   52215 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0915 07:57:30.670522   52215 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0915 07:57:30.670541   52215 kubeadm.go:310] 
	I0915 07:57:30.670683   52215 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0915 07:57:30.670803   52215 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0915 07:57:30.670815   52215 kubeadm.go:310] 
	I0915 07:57:30.670955   52215 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0915 07:57:30.671028   52215 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0915 07:57:30.671132   52215 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0915 07:57:30.671252   52215 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0915 07:57:30.671305   52215 kubeadm.go:310] 
	I0915 07:57:30.671332   52215 kubeadm.go:394] duration metric: took 3m55.892812295s to StartCluster
	I0915 07:57:30.671388   52215 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 07:57:30.671450   52215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 07:57:30.713477   52215 cri.go:89] found id: ""
	I0915 07:57:30.713503   52215 logs.go:276] 0 containers: []
	W0915 07:57:30.713513   52215 logs.go:278] No container was found matching "kube-apiserver"
	I0915 07:57:30.713521   52215 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 07:57:30.713589   52215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 07:57:30.754303   52215 cri.go:89] found id: ""
	I0915 07:57:30.754341   52215 logs.go:276] 0 containers: []
	W0915 07:57:30.754365   52215 logs.go:278] No container was found matching "etcd"
	I0915 07:57:30.754375   52215 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 07:57:30.754445   52215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 07:57:30.789328   52215 cri.go:89] found id: ""
	I0915 07:57:30.789373   52215 logs.go:276] 0 containers: []
	W0915 07:57:30.789383   52215 logs.go:278] No container was found matching "coredns"
	I0915 07:57:30.789392   52215 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 07:57:30.789462   52215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 07:57:30.824775   52215 cri.go:89] found id: ""
	I0915 07:57:30.824808   52215 logs.go:276] 0 containers: []
	W0915 07:57:30.824820   52215 logs.go:278] No container was found matching "kube-scheduler"
	I0915 07:57:30.824828   52215 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 07:57:30.824891   52215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 07:57:30.858882   52215 cri.go:89] found id: ""
	I0915 07:57:30.858912   52215 logs.go:276] 0 containers: []
	W0915 07:57:30.858924   52215 logs.go:278] No container was found matching "kube-proxy"
	I0915 07:57:30.858931   52215 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 07:57:30.858996   52215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 07:57:30.896698   52215 cri.go:89] found id: ""
	I0915 07:57:30.896725   52215 logs.go:276] 0 containers: []
	W0915 07:57:30.896733   52215 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 07:57:30.896739   52215 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 07:57:30.896796   52215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 07:57:30.935563   52215 cri.go:89] found id: ""
	I0915 07:57:30.935590   52215 logs.go:276] 0 containers: []
	W0915 07:57:30.935600   52215 logs.go:278] No container was found matching "kindnet"
	I0915 07:57:30.935611   52215 logs.go:123] Gathering logs for kubelet ...
	I0915 07:57:30.935625   52215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 07:57:30.999459   52215 logs.go:123] Gathering logs for dmesg ...
	I0915 07:57:30.999493   52215 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 07:57:31.014769   52215 logs.go:123] Gathering logs for describe nodes ...
	I0915 07:57:31.014798   52215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 07:57:31.140908   52215 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 07:57:31.140931   52215 logs.go:123] Gathering logs for CRI-O ...
	I0915 07:57:31.140945   52215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 07:57:31.261388   52215 logs.go:123] Gathering logs for container status ...
	I0915 07:57:31.261427   52215 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
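The log-gathering step above shells out for the kubelet journal, dmesg, the describe-nodes output, the CRI-O journal and the container status. A simplified local sketch of the same collection in Go (command names mirror the log; running them needs root on a CRI-O host, so treat this as illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

// collectDiagnostics runs the same kind of commands the gathering step
// above executes over SSH, but locally and without the SSH runner.
func collectDiagnostics() map[string]string {
	cmds := map[string][]string{
		"kubelet journal": {"journalctl", "-u", "kubelet", "-n", "400"},
		"crio journal":    {"journalctl", "-u", "crio", "-n", "400"},
		"container list":  {"crictl", "ps", "-a"},
	}
	out := make(map[string]string)
	for name, args := range cmds {
		b, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("error: %v\n%s", err, b)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, text := range collectDiagnostics() {
		fmt.Printf("=== %s ===\n%s\n", name, text)
	}
}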
	W0915 07:57:31.303225   52215 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0915 07:57:31.303309   52215 out.go:270] * 
	W0915 07:57:31.303375   52215 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0915 07:57:31.303392   52215 out.go:270] * 
	* 
	W0915 07:57:31.304620   52215 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 07:57:31.308092   52215 out.go:201] 
	W0915 07:57:31.309449   52215 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0915 07:57:31.309521   52215 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0915 07:57:31.309551   52215 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0915 07:57:31.311230   52215 out.go:201] 

                                                
                                                
** /stderr **
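For reference, the kubelet diagnostics suggested in the kubeadm output above can be run against the node over minikube ssh. This is a minimal sketch only (the profile name is taken from this run, the commands mirror the advice printed above, and none of them were executed as part of this report):

	# check kubelet service state and recent logs inside the VM
	out/minikube-linux-amd64 -p kubernetes-upgrade-669362 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-669362 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# probe the kubelet healthz endpoint that the kubelet-check above was polling
	out/minikube-linux-amd64 -p kubernetes-upgrade-669362 ssh "curl -sS http://localhost:10248/healthz"
	# list control-plane containers known to CRI-O, as suggested in the kubeadm output
	out/minikube-linux-amd64 -p kubernetes-upgrade-669362 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"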
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
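The error output above also suggests pinning the kubelet cgroup driver to systemd. A hedged retry of the same start command with that suggestion applied (not executed in this run) would look like:

	# same flags as the failing start, plus the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd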
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-669362
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-669362: (1.437869575s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-669362 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-669362 status --format={{.Host}}: exit status 7 (75.510056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0915 07:57:56.196989   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.496092218s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-669362 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.998629ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-669362] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-669362
	    minikube start -p kubernetes-upgrade-669362 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6693622 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-669362 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
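As the K8S_DOWNGRADE_UNSUPPORTED message above spells out, an in-place downgrade from v1.31.1 to v1.20.0 is refused; the safe path is to recreate the profile at the older version. A sketch of option 1 from the printed suggestion, using this test's driver and runtime flags (not executed in this run):

	# recreate the cluster at the older Kubernetes version instead of downgrading in place
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-669362
	out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio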
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (14m1.049698365s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-669362] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-669362" primary control-plane node in "kubernetes-upgrade-669362" cluster
	* Updating the running kvm2 "kubernetes-upgrade-669362" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:59:12.569056   60028 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:59:12.569159   60028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:59:12.569170   60028 out.go:358] Setting ErrFile to fd 2...
	I0915 07:59:12.569176   60028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:59:12.569400   60028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:59:12.570016   60028 out.go:352] Setting JSON to false
	I0915 07:59:12.570972   60028 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6099,"bootTime":1726381054,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:59:12.571092   60028 start.go:139] virtualization: kvm guest
	I0915 07:59:12.573262   60028 out.go:177] * [kubernetes-upgrade-669362] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:59:12.574858   60028 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:59:12.574877   60028 notify.go:220] Checking for updates...
	I0915 07:59:12.577790   60028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:59:12.579301   60028 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:59:12.580961   60028 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:59:12.582352   60028 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:59:12.583776   60028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:59:12.585431   60028 config.go:182] Loaded profile config "kubernetes-upgrade-669362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:59:12.586025   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:59:12.586096   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:59:12.602617   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0915 07:59:12.603164   60028 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:59:12.603724   60028 main.go:141] libmachine: Using API Version  1
	I0915 07:59:12.603747   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:59:12.604080   60028 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:59:12.604287   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:12.604534   60028 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:59:12.604838   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:59:12.604871   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:59:12.619635   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
	I0915 07:59:12.620170   60028 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:59:12.620632   60028 main.go:141] libmachine: Using API Version  1
	I0915 07:59:12.620664   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:59:12.621068   60028 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:59:12.621278   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:12.658132   60028 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:59:12.659698   60028 start.go:297] selected driver: kvm2
	I0915 07:59:12.659723   60028 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:59:12.659873   60028 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:59:12.660948   60028 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:59:12.661054   60028 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:59:12.676583   60028 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:59:12.676963   60028 cni.go:84] Creating CNI manager for ""
	I0915 07:59:12.677009   60028 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:59:12.677047   60028 start.go:340] cluster config:
	{Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-669362 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:59:12.677141   60028 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:59:12.678953   60028 out.go:177] * Starting "kubernetes-upgrade-669362" primary control-plane node in "kubernetes-upgrade-669362" cluster
	I0915 07:59:12.680432   60028 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:59:12.680471   60028 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:59:12.680478   60028 cache.go:56] Caching tarball of preloaded images
	I0915 07:59:12.680610   60028 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:59:12.680624   60028 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:59:12.680709   60028 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/config.json ...
	I0915 07:59:12.680903   60028 start.go:360] acquireMachinesLock for kubernetes-upgrade-669362: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:59:18.514574   60028 start.go:364] duration metric: took 5.833616289s to acquireMachinesLock for "kubernetes-upgrade-669362"
	I0915 07:59:18.514630   60028 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:59:18.514641   60028 fix.go:54] fixHost starting: 
	I0915 07:59:18.515056   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:59:18.515105   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:59:18.530932   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0915 07:59:18.531385   60028 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:59:18.531891   60028 main.go:141] libmachine: Using API Version  1
	I0915 07:59:18.531913   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:59:18.532260   60028 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:59:18.532450   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:18.532606   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetState
	I0915 07:59:18.534185   60028 fix.go:112] recreateIfNeeded on kubernetes-upgrade-669362: state=Running err=<nil>
	W0915 07:59:18.534211   60028 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:59:18.536328   60028 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-669362" VM ...
	I0915 07:59:18.537711   60028 machine.go:93] provisionDockerMachine start ...
	I0915 07:59:18.537737   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:18.537952   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:18.540524   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.540920   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:18.540948   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.541148   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:18.541310   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:18.541467   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:18.541583   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:18.541770   60028 main.go:141] libmachine: Using SSH client type: native
	I0915 07:59:18.542020   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:59:18.542034   60028 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:59:18.668097   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-669362
	
	I0915 07:59:18.668136   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetMachineName
	I0915 07:59:18.668412   60028 buildroot.go:166] provisioning hostname "kubernetes-upgrade-669362"
	I0915 07:59:18.668446   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetMachineName
	I0915 07:59:18.668611   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:18.671809   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.672168   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:18.672209   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.672314   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:18.672502   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:18.672676   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:18.672823   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:18.672999   60028 main.go:141] libmachine: Using SSH client type: native
	I0915 07:59:18.673232   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:59:18.673248   60028 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-669362 && echo "kubernetes-upgrade-669362" | sudo tee /etc/hostname
	I0915 07:59:18.809558   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-669362
	
	I0915 07:59:18.809586   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:18.812576   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.812978   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:18.813010   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.813296   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:18.813480   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:18.813672   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:18.813848   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:18.814008   60028 main.go:141] libmachine: Using SSH client type: native
	I0915 07:59:18.814260   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:59:18.814287   60028 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-669362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-669362/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-669362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:59:18.934864   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:59:18.934894   60028 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:59:18.934940   60028 buildroot.go:174] setting up certificates
	I0915 07:59:18.934957   60028 provision.go:84] configureAuth start
	I0915 07:59:18.934988   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetMachineName
	I0915 07:59:18.935252   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetIP
	I0915 07:59:18.937953   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.938347   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:18.938393   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.938559   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:18.941022   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.941381   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:18.941408   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:18.941581   60028 provision.go:143] copyHostCerts
	I0915 07:59:18.941652   60028 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:59:18.941665   60028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:59:18.941732   60028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:59:18.941902   60028 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:59:18.941916   60028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:59:18.941949   60028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:59:18.942052   60028 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:59:18.942063   60028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:59:18.942094   60028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:59:18.942188   60028 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-669362 san=[127.0.0.1 192.168.83.150 kubernetes-upgrade-669362 localhost minikube]
	I0915 07:59:19.215013   60028 provision.go:177] copyRemoteCerts
	I0915 07:59:19.215098   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:59:19.215131   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:19.218000   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:19.218379   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:19.218412   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:19.218582   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:19.218813   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:19.218989   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:19.219150   60028 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:59:19.316103   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:59:19.350097   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0915 07:59:19.378867   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:59:19.409381   60028 provision.go:87] duration metric: took 474.406642ms to configureAuth
	I0915 07:59:19.409425   60028 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:59:19.409642   60028 config.go:182] Loaded profile config "kubernetes-upgrade-669362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:59:19.409734   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:19.412801   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:19.413335   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:19.413359   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:19.413613   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:19.413833   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:19.413985   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:19.414148   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:19.414330   60028 main.go:141] libmachine: Using SSH client type: native
	I0915 07:59:19.414544   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:59:19.414578   60028 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:59:27.480349   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:59:27.480376   60028 machine.go:96] duration metric: took 8.942648121s to provisionDockerMachine
	I0915 07:59:27.480388   60028 start.go:293] postStartSetup for "kubernetes-upgrade-669362" (driver="kvm2")
	I0915 07:59:27.480399   60028 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:59:27.480423   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:27.480736   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:59:27.480762   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:27.483538   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.483858   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:27.483886   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.484041   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:27.484224   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:27.484424   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:27.484560   60028 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:59:27.606123   60028 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:59:27.622037   60028 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:59:27.622075   60028 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:59:27.622167   60028 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:59:27.622289   60028 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:59:27.622424   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:59:27.646158   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:59:27.711248   60028 start.go:296] duration metric: took 230.845136ms for postStartSetup
	I0915 07:59:27.711290   60028 fix.go:56] duration metric: took 9.196649587s for fixHost
	I0915 07:59:27.711317   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:27.713973   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.714324   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:27.714352   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.714524   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:27.714738   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:27.714885   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:27.715004   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:27.715183   60028 main.go:141] libmachine: Using SSH client type: native
	I0915 07:59:27.715382   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.150 22 <nil> <nil>}
	I0915 07:59:27.715393   60028 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:59:27.898614   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726387167.889811081
	
	I0915 07:59:27.898636   60028 fix.go:216] guest clock: 1726387167.889811081
	I0915 07:59:27.898645   60028 fix.go:229] Guest: 2024-09-15 07:59:27.889811081 +0000 UTC Remote: 2024-09-15 07:59:27.711295202 +0000 UTC m=+15.181767819 (delta=178.515879ms)
	I0915 07:59:27.898681   60028 fix.go:200] guest clock delta is within tolerance: 178.515879ms
	I0915 07:59:27.898692   60028 start.go:83] releasing machines lock for "kubernetes-upgrade-669362", held for 9.384085491s
	I0915 07:59:27.898712   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:27.898965   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetIP
	I0915 07:59:27.902284   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.902747   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:27.902778   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.902926   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:27.903415   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:27.903609   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .DriverName
	I0915 07:59:27.903722   60028 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:59:27.903775   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:27.903850   60028 ssh_runner.go:195] Run: cat /version.json
	I0915 07:59:27.903877   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHHostname
	I0915 07:59:27.906959   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.907187   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.907381   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:27.907412   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.907506   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:27.907659   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 07:59:27.907683   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 07:59:27.907715   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:27.907846   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHPort
	I0915 07:59:27.907996   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHKeyPath
	I0915 07:59:27.907999   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:27.908160   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetSSHUsername
	I0915 07:59:27.908166   60028 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:59:27.908300   60028 sshutil.go:53] new ssh client: &{IP:192.168.83.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/kubernetes-upgrade-669362/id_rsa Username:docker}
	I0915 07:59:28.132004   60028 ssh_runner.go:195] Run: systemctl --version
	I0915 07:59:28.213726   60028 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:59:28.595849   60028 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:59:28.729523   60028 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:59:28.729593   60028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:59:28.762184   60028 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:59:28.762214   60028 start.go:495] detecting cgroup driver to use...
	I0915 07:59:28.762293   60028 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:59:28.795548   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:59:28.824068   60028 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:59:28.824135   60028 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:59:28.857051   60028 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:59:28.876967   60028 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:59:29.115685   60028 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:59:29.320736   60028 docker.go:233] disabling docker service ...
	I0915 07:59:29.320812   60028 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:59:29.341483   60028 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:59:29.399000   60028 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:59:29.637602   60028 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:59:29.885183   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:59:29.903581   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:59:29.930252   60028 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:59:29.930328   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:59:29.950418   60028 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:59:29.950495   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:59:29.967868   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:59:29.981861   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:59:29.999070   60028 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:59:30.014986   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:59:30.028487   60028 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:59:30.051739   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:59:30.064679   60028 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:59:30.084209   60028 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:59:30.103702   60028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:59:30.333833   60028 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 08:01:00.929474   60028 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.595607957s)
	I0915 08:01:00.929503   60028 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 08:01:00.929569   60028 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 08:01:00.935260   60028 start.go:563] Will wait 60s for crictl version
	I0915 08:01:00.935313   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:01:00.939257   60028 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 08:01:00.987095   60028 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 08:01:00.987178   60028 ssh_runner.go:195] Run: crio --version
	I0915 08:01:01.017370   60028 ssh_runner.go:195] Run: crio --version
	I0915 08:01:01.051088   60028 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 08:01:01.052314   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) Calling .GetIP
	I0915 08:01:01.054923   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 08:01:01.055429   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:f3:5f", ip: ""} in network mk-kubernetes-upgrade-669362: {Iface:virbr3 ExpiryTime:2024-09-15 08:58:44 +0000 UTC Type:0 Mac:52:54:00:62:f3:5f Iaid: IPaddr:192.168.83.150 Prefix:24 Hostname:kubernetes-upgrade-669362 Clientid:01:52:54:00:62:f3:5f}
	I0915 08:01:01.055453   60028 main.go:141] libmachine: (kubernetes-upgrade-669362) DBG | domain kubernetes-upgrade-669362 has defined IP address 192.168.83.150 and MAC address 52:54:00:62:f3:5f in network mk-kubernetes-upgrade-669362
	I0915 08:01:01.055655   60028 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0915 08:01:01.060261   60028 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 08:01:01.060374   60028 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 08:01:01.060433   60028 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 08:01:01.103565   60028 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 08:01:01.103593   60028 crio.go:433] Images already preloaded, skipping extraction
	I0915 08:01:01.103647   60028 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 08:01:01.142704   60028 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 08:01:01.142729   60028 cache_images.go:84] Images are preloaded, skipping loading
	I0915 08:01:01.142739   60028 kubeadm.go:934] updating node { 192.168.83.150 8443 v1.31.1 crio true true} ...
	I0915 08:01:01.142868   60028 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-669362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 08:01:01.142950   60028 ssh_runner.go:195] Run: crio config
	I0915 08:01:01.190362   60028 cni.go:84] Creating CNI manager for ""
	I0915 08:01:01.190386   60028 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 08:01:01.190395   60028 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 08:01:01.190413   60028 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.150 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-669362 NodeName:kubernetes-upgrade-669362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 08:01:01.190559   60028 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-669362"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 08:01:01.190618   60028 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 08:01:01.201558   60028 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 08:01:01.201628   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 08:01:01.211868   60028 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0915 08:01:01.229958   60028 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 08:01:01.247825   60028 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0915 08:01:01.265782   60028 ssh_runner.go:195] Run: grep 192.168.83.150	control-plane.minikube.internal$ /etc/hosts
	I0915 08:01:01.270422   60028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:01:01.428878   60028 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 08:01:01.445353   60028 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362 for IP: 192.168.83.150
	I0915 08:01:01.445381   60028 certs.go:194] generating shared ca certs ...
	I0915 08:01:01.445401   60028 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:01:01.445603   60028 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 08:01:01.445667   60028 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 08:01:01.445681   60028 certs.go:256] generating profile certs ...
	I0915 08:01:01.445764   60028 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/client.key
	I0915 08:01:01.445889   60028 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key.540c6cb2
	I0915 08:01:01.445927   60028 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.key
	I0915 08:01:01.446032   60028 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 08:01:01.446061   60028 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 08:01:01.446094   60028 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 08:01:01.446120   60028 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 08:01:01.446143   60028 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 08:01:01.446169   60028 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 08:01:01.446209   60028 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 08:01:01.446890   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 08:01:01.472928   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 08:01:01.497959   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 08:01:01.526323   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 08:01:01.556024   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0915 08:01:01.584115   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 08:01:01.611908   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 08:01:01.640233   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/kubernetes-upgrade-669362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 08:01:01.669314   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 08:01:01.695562   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 08:01:01.722180   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 08:01:01.747496   60028 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 08:01:01.764912   60028 ssh_runner.go:195] Run: openssl version
	I0915 08:01:01.771606   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 08:01:01.782969   60028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 08:01:01.787577   60028 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 08:01:01.787637   60028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 08:01:01.793735   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 08:01:01.804149   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 08:01:01.815141   60028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:01:01.820170   60028 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:01:01.820237   60028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:01:01.826528   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 08:01:01.837159   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 08:01:01.849445   60028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 08:01:01.854959   60028 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 08:01:01.855025   60028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 08:01:01.861239   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 08:01:01.872053   60028 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 08:01:01.877294   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 08:01:01.884030   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 08:01:01.890770   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 08:01:01.896893   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 08:01:01.903213   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 08:01:01.909714   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0915 08:01:01.916698   60028 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-669362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-669362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 08:01:01.916796   60028 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 08:01:01.916860   60028 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 08:01:01.960959   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:01:01.960985   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:01:01.960991   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:01:01.960996   60028 cri.go:89] found id: "b0087be507aa3d263d6fb01491f57a7bc66108670860b624048cf783013a633c"
	I0915 08:01:01.961014   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:01:01.961019   60028 cri.go:89] found id: "805e023c43c1bd4674461b5091dfecbc7abcd74b60a6f58c3336705bbd428957"
	I0915 08:01:01.961023   60028 cri.go:89] found id: "a46a3013aca4d0567179ef559df90ed337f20bdbd6ecf968242fdb4435cf6807"
	I0915 08:01:01.961027   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:01:01.961031   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:01:01.961039   60028 cri.go:89] found id: "6ce26fae41fd0f61458771f8b99e920440b4c8eeb7922678eaec98518e5889fe"
	I0915 08:01:01.961044   60028 cri.go:89] found id: "ab8b83c654e271eaedbd67d4700d23ec63e0db4c531b919fe44a4f5aee9a7600"
	I0915 08:01:01.961048   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:01:01.961053   60028 cri.go:89] found id: "3c9827a99111eb117c07d0500de08596ffc9a9816af4bd12aa07805a588f80db"
	I0915 08:01:01.961055   60028 cri.go:89] found id: ""
	I0915 08:01:01.961118   60028 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-15 08:13:13.585190921 +0000 UTC m=+6212.533464516
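A minimal manual-diagnosis sketch for this failure, assuming the same kubernetes-upgrade-669362 profile and the locally built out/minikube-linux-amd64 binary; these commands mirror ones already captured in the log above and were not executed as part of this run:
# sketch only: not part of the recorded test output
# re-run the upgrade start that exited with status 109
out/minikube-linux-amd64 start -p kubernetes-upgrade-669362 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
# the captured log shows "sudo systemctl restart crio" taking ~1m30s (08:01:00 above), so inspect the runtime first
out/minikube-linux-amd64 -p kubernetes-upgrade-669362 ssh -- sudo systemctl status crio --no-pager
out/minikube-linux-amd64 -p kubernetes-upgrade-669362 ssh -- sudo journalctl -u crio --no-pager -n 100
# then check whether the control-plane containers (kube-apiserver, etcd) are actually up
out/minikube-linux-amd64 -p kubernetes-upgrade-669362 ssh -- sudo crictl ps -a
out/minikube-linux-amd64 -p kubernetes-upgrade-669362 logs -n 25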
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-669362 -n kubernetes-upgrade-669362
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-669362 -n kubernetes-upgrade-669362: exit status 2 (238.853176ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-669362 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-669362 logs -n 25: (1.951466868s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-368115                              | old-k8s-version-368115    | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-315583 sudo                            | NoKubernetes-315583       | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC |                     |
	|         | systemctl is-active --quiet                            |                           |         |         |                     |                     |
	|         | service kubelet                                        |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-315583                                 | NoKubernetes-315583       | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC | 15 Sep 24 07:57 UTC |
	| delete  | -p running-upgrade-972764                              | running-upgrade-972764    | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC | 15 Sep 24 07:57 UTC |
	| start   | -p NoKubernetes-315583                                 | NoKubernetes-315583       | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC | 15 Sep 24 07:58 UTC |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p no-preload-778087                                   | no-preload-778087         | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC | 15 Sep 24 07:59 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-669362                           | kubernetes-upgrade-669362 | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC | 15 Sep 24 07:57 UTC |
	| start   | -p kubernetes-upgrade-669362                           | kubernetes-upgrade-669362 | jenkins | v1.34.0 | 15 Sep 24 07:57 UTC | 15 Sep 24 07:59 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-315583 sudo                            | NoKubernetes-315583       | jenkins | v1.34.0 | 15 Sep 24 07:58 UTC |                     |
	|         | systemctl is-active --quiet                            |                           |         |         |                     |                     |
	|         | service kubelet                                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-315583                                 | NoKubernetes-315583       | jenkins | v1.34.0 | 15 Sep 24 07:58 UTC | 15 Sep 24 07:58 UTC |
	| start   | -p embed-certs-474196                                  | embed-certs-474196        | jenkins | v1.34.0 | 15 Sep 24 07:58 UTC | 15 Sep 24 07:59 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-669362                           | kubernetes-upgrade-669362 | jenkins | v1.34.0 | 15 Sep 24 07:59 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-669362                           | kubernetes-upgrade-669362 | jenkins | v1.34.0 | 15 Sep 24 07:59 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778087             | no-preload-778087         | jenkins | v1.34.0 | 15 Sep 24 07:59 UTC | 15 Sep 24 07:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-778087                                   | no-preload-778087         | jenkins | v1.34.0 | 15 Sep 24 07:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-474196            | embed-certs-474196        | jenkins | v1.34.0 | 15 Sep 24 07:59 UTC | 15 Sep 24 07:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-474196                                  | embed-certs-474196        | jenkins | v1.34.0 | 15 Sep 24 07:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-368115        | old-k8s-version-368115    | jenkins | v1.34.0 | 15 Sep 24 08:01 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778087                  | no-preload-778087         | jenkins | v1.34.0 | 15 Sep 24 08:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-778087                                   | no-preload-778087         | jenkins | v1.34.0 | 15 Sep 24 08:02 UTC | 15 Sep 24 08:12 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-474196                 | embed-certs-474196        | jenkins | v1.34.0 | 15 Sep 24 08:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-474196                                  | embed-certs-474196        | jenkins | v1.34.0 | 15 Sep 24 08:02 UTC | 15 Sep 24 08:11 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-368115                              | old-k8s-version-368115    | jenkins | v1.34.0 | 15 Sep 24 08:03 UTC | 15 Sep 24 08:03 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-368115             | old-k8s-version-368115    | jenkins | v1.34.0 | 15 Sep 24 08:03 UTC | 15 Sep 24 08:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-368115                              | old-k8s-version-368115    | jenkins | v1.34.0 | 15 Sep 24 08:03 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 08:03:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 08:03:47.898075   61935 out.go:345] Setting OutFile to fd 1 ...
	I0915 08:03:47.898303   61935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 08:03:47.898312   61935 out.go:358] Setting ErrFile to fd 2...
	I0915 08:03:47.898316   61935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 08:03:47.898509   61935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 08:03:47.899025   61935 out.go:352] Setting JSON to false
	I0915 08:03:47.900056   61935 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6374,"bootTime":1726381054,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 08:03:47.900147   61935 start.go:139] virtualization: kvm guest
	I0915 08:03:47.902529   61935 out.go:177] * [old-k8s-version-368115] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 08:03:47.904171   61935 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 08:03:47.904170   61935 notify.go:220] Checking for updates...
	I0915 08:03:47.905857   61935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 08:03:47.907119   61935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 08:03:47.908453   61935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 08:03:47.909854   61935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 08:03:47.911152   61935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 08:03:47.913011   61935 config.go:182] Loaded profile config "old-k8s-version-368115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0915 08:03:47.913410   61935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:03:47.913455   61935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:03:47.928469   61935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0915 08:03:47.928947   61935 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:03:47.929484   61935 main.go:141] libmachine: Using API Version  1
	I0915 08:03:47.929506   61935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:03:47.929790   61935 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:03:47.929986   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:03:47.932101   61935 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0915 08:03:47.933250   61935 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 08:03:47.933569   61935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:03:47.933626   61935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:03:47.948500   61935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0915 08:03:47.948943   61935 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:03:47.949489   61935 main.go:141] libmachine: Using API Version  1
	I0915 08:03:47.949510   61935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:03:47.949933   61935 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:03:47.950125   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:03:47.985970   61935 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 08:03:47.987395   61935 start.go:297] selected driver: kvm2
	I0915 08:03:47.987405   61935 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-368115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 08:03:47.987520   61935 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 08:03:47.988231   61935 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 08:03:47.988296   61935 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 08:03:48.003402   61935 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 08:03:48.003781   61935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 08:03:48.003817   61935 cni.go:84] Creating CNI manager for ""
	I0915 08:03:48.003855   61935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 08:03:48.003911   61935 start.go:340] cluster config:
	{Name:old-k8s-version-368115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 08:03:48.004012   61935 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 08:03:48.006916   61935 out.go:177] * Starting "old-k8s-version-368115" primary control-plane node in "old-k8s-version-368115" cluster
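The cluster config dumped twice above is also persisted to disk; later in this log the same run saves it to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/config.json. As a rough, illustrative sketch only (assuming the on-disk JSON uses the same field names as the dump; on a workstation the file normally lives under ~/.minikube/profiles/<name>/config.json), the saved profile can be inspected with a few lines of Go:

// readprofile.go: hedged sketch that reads a saved minikube profile config.json
// and prints a few of the fields seen in the dump above. The path is the CI
// host's; adjust it for a local ~/.minikube layout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	path := "/home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/config.json"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Decode into a generic map rather than minikube's own ClusterConfig struct,
	// so the sketch does not depend on minikube internals.
	var cfg map[string]any
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("Driver:", cfg["Driver"])
	if kc, ok := cfg["KubernetesConfig"].(map[string]any); ok {
		fmt.Println("KubernetesVersion:", kc["KubernetesVersion"])
		fmt.Println("ContainerRuntime:", kc["ContainerRuntime"])
	}
}

Decoding into a generic map keeps the sketch independent of minikube's internal config types, at the cost of the type assertions shown.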
	I0915 08:03:48.643344   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:03:48.657512   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:03:48.657579   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:03:48.691351   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:03:48.691371   60028 cri.go:89] found id: ""
	I0915 08:03:48.691378   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:03:48.691424   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.695575   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:03:48.695628   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:03:48.731368   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:03:48.731392   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:03:48.731397   60028 cri.go:89] found id: ""
	I0915 08:03:48.731404   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:03:48.731454   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.735585   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.739463   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:03:48.739532   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:03:48.775094   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:03:48.775118   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:03:48.775122   60028 cri.go:89] found id: ""
	I0915 08:03:48.775129   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:03:48.775176   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.779331   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.783124   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:03:48.783183   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:03:48.819831   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:03:48.819853   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:03:48.819858   60028 cri.go:89] found id: ""
	I0915 08:03:48.819865   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:03:48.819910   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.824136   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.827921   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:03:48.827992   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:03:48.867650   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:03:48.867675   60028 cri.go:89] found id: ""
	I0915 08:03:48.867685   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:03:48.867741   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.871842   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:03:48.871889   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:03:48.905344   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:03:48.905367   60028 cri.go:89] found id: ""
	I0915 08:03:48.905376   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:03:48.905435   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.909666   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:03:48.909735   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:03:48.944812   60028 cri.go:89] found id: ""
	I0915 08:03:48.944838   60028 logs.go:276] 0 containers: []
	W0915 08:03:48.944849   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:03:48.944856   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:03:48.944917   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:03:48.980432   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:03:48.980453   60028 cri.go:89] found id: ""
	I0915 08:03:48.980460   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:03:48.980516   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:48.984505   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:03:48.984527   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:03:48.997450   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:03:48.997473   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:03:49.049994   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:03:49.050025   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:03:49.091923   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:03:49.091952   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:03:49.127642   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:03:49.127670   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:03:49.163958   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:03:49.163984   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:03:49.200741   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:03:49.200770   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:03:49.279935   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:03:49.279973   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:03:49.315679   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:03:49.315707   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:03:49.424741   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:03:49.424776   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:03:49.492136   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
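Every "describe nodes" attempt in these passes fails the same way: kubectl on the node is refused at localhost:8443, meaning nothing is accepting connections on the apiserver port at that moment (a refusal is distinct from a timeout or from the "no route to host" dial errors elsewhere in this log). A hedged, stand-alone probe along the following lines, run on the node (for example via minikube ssh), would distinguish those cases; it is illustrative only and not part of the test harness:

// probe8443.go: hedged sketch of checking the apiserver endpoint that the
// "connection refused" errors above refer to.
package main

import (
	"errors"
	"fmt"
	"net"
	"os"
	"syscall"
	"time"
)

func main() {
	addr := "127.0.0.1:8443" // the endpoint kubectl was refused on
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Printf("%s: something is listening (apiserver may be up)\n", addr)
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Printf("%s: connection refused - no process is bound to the port\n", addr)
		os.Exit(1)
	default:
		// Timeouts, "no route to host", etc. point at networking rather than
		// a missing listener.
		fmt.Printf("%s: dial failed: %v\n", addr, err)
		os.Exit(1)
	}
}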
	I0915 08:03:49.492158   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:03:49.492187   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:03:49.533703   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:03:49.533732   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:03:49.569465   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:03:49.569493   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:03:49.923316   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:03:49.923353   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:03:49.964746   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:03:49.964782   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
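The block above is one complete gathering pass, and it repeats the same two-step pattern for every control-plane component: resolve container IDs with sudo crictl ps -a --quiet --name=<component>, then pull the last 400 lines of each container's logs with crictl logs --tail 400 <id> (kubelet and CRI-O come from journalctl instead). A minimal stand-alone sketch of that pattern, shelling out to crictl the same way the recorded commands do rather than using minikube's own cri.go/logs.go code, might look like:

// crigather.go: hedged sketch of the "list IDs by name, then tail logs"
// pattern the pass above repeats. It assumes crictl is on PATH and usable
// via sudo on the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: sudo crictl ps -a --quiet --name=<component>
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors: sudo crictl logs --tail 400 <id>
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: listing failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
		for _, id := range ids {
			logs, err := tailLogs(id)
			if err != nil {
				fmt.Printf("  %s: logs failed: %v\n", id, err)
				continue
			}
			fmt.Printf("  %s: %d bytes of logs\n", id, len(logs))
		}
	}
}

Run inside the node, this reproduces the per-component sections the harness assembles above; the kubelet, CRI-O, and dmesg sections would still come from journalctl and dmesg as shown in the log.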
	I0915 08:03:52.506587   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:03:52.520776   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:03:52.520840   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:03:52.561322   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:03:52.561343   60028 cri.go:89] found id: ""
	I0915 08:03:52.561349   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:03:52.561393   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.565606   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:03:52.565663   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:03:49.585987   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:03:52.658011   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:03:48.008243   61935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 08:03:48.008280   61935 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0915 08:03:48.008287   61935 cache.go:56] Caching tarball of preloaded images
	I0915 08:03:48.008355   61935 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 08:03:48.008366   61935 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0915 08:03:48.008457   61935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/config.json ...
	I0915 08:03:48.008630   61935 start.go:360] acquireMachinesLock for old-k8s-version-368115: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 08:03:52.600181   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:03:52.600201   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:03:52.600205   60028 cri.go:89] found id: ""
	I0915 08:03:52.600211   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:03:52.600262   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.604266   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.608147   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:03:52.608200   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:03:52.647559   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:03:52.647579   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:03:52.647584   60028 cri.go:89] found id: ""
	I0915 08:03:52.647592   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:03:52.647643   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.651634   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.655669   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:03:52.655724   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:03:52.691734   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:03:52.691757   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:03:52.691762   60028 cri.go:89] found id: ""
	I0915 08:03:52.691771   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:03:52.691838   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.695906   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.699686   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:03:52.699738   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:03:52.738487   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:03:52.738512   60028 cri.go:89] found id: ""
	I0915 08:03:52.738522   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:03:52.738571   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.742690   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:03:52.742757   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:03:52.778381   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:03:52.778402   60028 cri.go:89] found id: ""
	I0915 08:03:52.778410   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:03:52.778459   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.782704   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:03:52.782763   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:03:52.817473   60028 cri.go:89] found id: ""
	I0915 08:03:52.817500   60028 logs.go:276] 0 containers: []
	W0915 08:03:52.817508   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:03:52.817514   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:03:52.817568   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:03:52.855233   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:03:52.855256   60028 cri.go:89] found id: ""
	I0915 08:03:52.855264   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:03:52.855319   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:52.859728   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:03:52.859753   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:03:52.895746   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:03:52.895776   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:03:52.932347   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:03:52.932382   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:03:53.294364   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:03:53.294411   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:03:53.337910   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:03:53.337943   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:03:53.378196   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:03:53.378229   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:03:53.448441   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:03:53.448484   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:03:53.483504   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:03:53.483531   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:03:53.525256   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:03:53.525291   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:03:53.561067   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:03:53.561094   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:03:53.605023   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:03:53.605048   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:03:53.676799   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:03:53.676823   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:03:53.676837   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:03:53.727689   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:03:53.727731   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:03:53.768227   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:03:53.768258   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:03:53.877822   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:03:53.877858   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:03:56.393031   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:03:56.406844   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:03:56.406905   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:03:56.440366   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:03:56.440385   60028 cri.go:89] found id: ""
	I0915 08:03:56.440392   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:03:56.440450   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.444523   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:03:56.444582   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:03:56.480439   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:03:56.480461   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:03:56.480464   60028 cri.go:89] found id: ""
	I0915 08:03:56.480471   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:03:56.480522   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.484716   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.489717   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:03:56.489779   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:03:56.524825   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:03:56.524845   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:03:56.524849   60028 cri.go:89] found id: ""
	I0915 08:03:56.524855   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:03:56.524900   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.528990   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.532790   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:03:56.532840   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:03:56.568445   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:03:56.568470   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:03:56.568477   60028 cri.go:89] found id: ""
	I0915 08:03:56.568485   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:03:56.568546   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.572512   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.576207   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:03:56.576269   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:03:56.614075   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:03:56.614103   60028 cri.go:89] found id: ""
	I0915 08:03:56.614110   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:03:56.614157   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.618294   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:03:56.618402   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:03:56.652322   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:03:56.652346   60028 cri.go:89] found id: ""
	I0915 08:03:56.652354   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:03:56.652403   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.656812   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:03:56.656870   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:03:56.694486   60028 cri.go:89] found id: ""
	I0915 08:03:56.694511   60028 logs.go:276] 0 containers: []
	W0915 08:03:56.694520   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:03:56.694526   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:03:56.694576   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:03:56.731218   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:03:56.731243   60028 cri.go:89] found id: ""
	I0915 08:03:56.731251   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:03:56.731298   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:03:56.735393   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:03:56.735412   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:03:56.805547   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:03:56.805580   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:03:56.843441   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:03:56.843466   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:03:56.914450   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:03:56.914471   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:03:56.914483   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:03:56.960926   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:03:56.960957   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:03:56.998722   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:03:56.998749   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:03:57.042620   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:03:57.042654   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:03:57.078024   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:03:57.078051   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:03:57.116576   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:03:57.116605   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:03:57.130347   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:03:57.130371   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:03:57.165569   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:03:57.165598   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:03:57.208435   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:03:57.208466   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:03:57.245010   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:03:57.245039   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:03:57.588542   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:03:57.588577   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:03:57.703665   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:03:57.703700   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:00.249301   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:00.263807   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:00.263883   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:00.304869   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:00.304894   60028 cri.go:89] found id: ""
	I0915 08:04:00.304902   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:00.304950   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.310596   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:00.310689   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:00.346622   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:00.346651   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:00.346657   60028 cri.go:89] found id: ""
	I0915 08:04:00.346666   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:00.346727   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.352048   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.356112   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:00.356199   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:00.398002   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:00.398024   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:00.398029   60028 cri.go:89] found id: ""
	I0915 08:04:00.398040   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:00.398101   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.402239   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.406275   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:00.406339   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:00.439713   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:00.439735   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:00.439741   60028 cri.go:89] found id: ""
	I0915 08:04:00.439750   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:00.439799   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.443886   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.447611   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:00.447654   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:00.482968   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:00.482999   60028 cri.go:89] found id: ""
	I0915 08:04:00.483009   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:00.483060   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.487269   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:00.487346   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:00.524126   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:00.524151   60028 cri.go:89] found id: ""
	I0915 08:04:00.524160   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:00.524222   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.528399   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:00.528476   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:00.566449   60028 cri.go:89] found id: ""
	I0915 08:04:00.566480   60028 logs.go:276] 0 containers: []
	W0915 08:04:00.566492   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:00.566499   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:00.566566   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:00.601279   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:00.601303   60028 cri.go:89] found id: ""
	I0915 08:04:00.601310   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:00.601360   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:00.605453   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:00.605475   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:00.719490   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:00.719531   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:00.789414   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:00.789439   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:00.789466   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:00.825046   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:00.825077   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:00.838799   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:00.838830   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:00.879554   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:00.879582   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:00.962264   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:00.962296   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:01.006036   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:01.006065   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:01.045202   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:01.045230   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:01.081140   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:01.081165   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:01.117579   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:01.117609   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:01.158104   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:01.158130   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:01.200107   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:01.200140   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:01.565405   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:01.565444   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:01.620989   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:01.621021   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:03:58.738055   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:01.810118   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:04.163783   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:04.177602   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:04.177687   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:04.213665   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:04.213692   60028 cri.go:89] found id: ""
	I0915 08:04:04.213702   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:04.213757   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.217934   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:04.217997   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:04.257599   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:04.257625   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:04.257632   60028 cri.go:89] found id: ""
	I0915 08:04:04.257640   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:04.257697   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.261975   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.266215   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:04.266284   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:04.302561   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:04.302587   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:04.302593   60028 cri.go:89] found id: ""
	I0915 08:04:04.302601   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:04.302660   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.308077   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.312053   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:04.312116   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:04.346641   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:04.346663   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:04.346667   60028 cri.go:89] found id: ""
	I0915 08:04:04.346676   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:04.346741   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.350941   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.354684   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:04.354751   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:04.392385   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:04.392413   60028 cri.go:89] found id: ""
	I0915 08:04:04.392423   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:04.392483   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.396556   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:04.396625   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:04.438833   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:04.438854   60028 cri.go:89] found id: ""
	I0915 08:04:04.438861   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:04.438912   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.443015   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:04.443078   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:04.478993   60028 cri.go:89] found id: ""
	I0915 08:04:04.479015   60028 logs.go:276] 0 containers: []
	W0915 08:04:04.479025   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:04.479032   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:04.479094   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:04.513987   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:04.514013   60028 cri.go:89] found id: ""
	I0915 08:04:04.514022   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:04.514079   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:04.518201   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:04.518222   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:04.552886   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:04.552911   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:04.587931   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:04.587963   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:04.624638   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:04.624673   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:04.707096   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:04.707135   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:04.759418   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:04.759450   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:04.799833   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:04.799869   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:04.841299   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:04.841327   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:04.877319   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:04.877349   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:05.223956   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:05.224002   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:05.329272   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:05.329307   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:05.343119   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:05.343148   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:05.380558   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:05.380585   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:05.421592   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:05.421621   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:05.490337   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:05.490370   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:05.490387   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:08.025930   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:08.040378   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:08.040448   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:08.077719   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:08.077747   60028 cri.go:89] found id: ""
	I0915 08:04:08.077757   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:08.077833   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.081838   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:08.081911   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:08.117362   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:08.117385   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:08.117391   60028 cri.go:89] found id: ""
	I0915 08:04:08.117399   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:08.117459   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.121644   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.125399   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:08.125461   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:08.161611   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:08.161637   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:08.161643   60028 cri.go:89] found id: ""
	I0915 08:04:08.161652   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:08.161712   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.166019   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.169663   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:08.169718   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:08.206757   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:08.206783   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:08.206788   60028 cri.go:89] found id: ""
	I0915 08:04:08.206797   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:08.206856   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.210828   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.214525   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:08.214572   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:08.250409   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:08.250438   60028 cri.go:89] found id: ""
	I0915 08:04:08.250445   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:08.250501   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.254519   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:08.254579   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:08.289902   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:08.289928   60028 cri.go:89] found id: ""
	I0915 08:04:08.289937   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:08.289994   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.293897   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:08.293955   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:08.338024   60028 cri.go:89] found id: ""
	I0915 08:04:08.338048   60028 logs.go:276] 0 containers: []
	W0915 08:04:08.338056   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:08.338062   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:08.338121   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:08.373817   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:08.373840   60028 cri.go:89] found id: ""
	I0915 08:04:08.373849   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:08.373906   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:08.377936   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:08.377955   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:08.432608   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:08.432637   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:08.473274   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:08.473307   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:08.509661   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:08.509686   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:08.590687   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:08.590721   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:08.625300   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:08.625327   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:08.673084   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:08.673110   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:08.789056   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:08.789094   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:08.826026   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:08.826061   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:08.862340   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:08.862371   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:08.934482   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:08.934509   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:08.934520   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:08.984542   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:08.984571   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:09.023331   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:09.023358   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:09.037389   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:09.037417   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:09.401961   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:09.401997   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:11.940235   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:11.954326   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:11.954407   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:11.990531   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:11.990562   60028 cri.go:89] found id: ""
	I0915 08:04:11.990571   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:11.990633   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:11.994789   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:11.994856   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:12.033999   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:12.034020   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:12.034024   60028 cri.go:89] found id: ""
	I0915 08:04:12.034030   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:12.034082   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.038518   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.042191   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:12.042257   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:12.079354   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:12.079379   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:12.079384   60028 cri.go:89] found id: ""
	I0915 08:04:12.079392   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:12.079468   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.083613   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.087172   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:12.087233   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:12.123264   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:12.123286   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:12.123289   60028 cri.go:89] found id: ""
	I0915 08:04:12.123297   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:12.123354   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.127253   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.130897   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:12.130945   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:12.164985   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:12.165006   60028 cri.go:89] found id: ""
	I0915 08:04:12.165013   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:12.165057   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.169078   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:12.169130   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:12.204046   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:12.204076   60028 cri.go:89] found id: ""
	I0915 08:04:12.204087   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:12.204147   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.208626   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:12.208698   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:12.245521   60028 cri.go:89] found id: ""
	I0915 08:04:12.245555   60028 logs.go:276] 0 containers: []
	W0915 08:04:12.245567   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:12.245573   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:12.245638   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:12.281348   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:12.281372   60028 cri.go:89] found id: ""
	I0915 08:04:12.281380   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:12.281431   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:12.285547   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:12.285585   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:12.339191   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:12.339230   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:12.373655   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:12.373683   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:12.416610   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:12.416638   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:12.459035   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:12.459063   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:12.494570   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:12.494603   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:12.531536   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:12.531570   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:07.890084   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:10.962154   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:12.646346   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:12.646384   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:12.713779   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:12.713819   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:12.713837   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:12.762053   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:12.762085   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:12.801074   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:12.801110   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:12.815396   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:12.815425   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:12.889080   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:12.889116   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:13.239713   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:13.239762   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:13.279574   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:13.279606   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:15.819712   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:15.833991   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:15.834052   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:15.871203   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:15.871235   60028 cri.go:89] found id: ""
	I0915 08:04:15.871245   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:15.871305   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:15.875849   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:15.875914   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:15.920474   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:15.920501   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:15.920505   60028 cri.go:89] found id: ""
	I0915 08:04:15.920511   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:15.920560   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:15.925069   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:15.929283   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:15.929346   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:15.969915   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:15.969944   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:15.969950   60028 cri.go:89] found id: ""
	I0915 08:04:15.969959   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:15.970013   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:15.974556   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:15.978683   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:15.978738   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:16.024092   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:16.024110   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:16.024113   60028 cri.go:89] found id: ""
	I0915 08:04:16.024120   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:16.024163   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:16.028820   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:16.032870   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:16.032927   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:16.073370   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:16.073390   60028 cri.go:89] found id: ""
	I0915 08:04:16.073396   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:16.073438   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:16.077662   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:16.077740   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:16.112843   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:16.112867   60028 cri.go:89] found id: ""
	I0915 08:04:16.112874   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:16.112922   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:16.117224   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:16.117327   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:16.152744   60028 cri.go:89] found id: ""
	I0915 08:04:16.152770   60028 logs.go:276] 0 containers: []
	W0915 08:04:16.152777   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:16.152783   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:16.152831   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:16.192366   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:16.192393   60028 cri.go:89] found id: ""
	I0915 08:04:16.192402   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:16.192466   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:16.196728   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:16.196748   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:16.313080   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:16.313120   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:16.357555   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:16.357585   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:16.371706   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:16.371737   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:16.415971   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:16.416001   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:16.458332   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:16.458361   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:16.492421   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:16.492453   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:16.530161   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:16.530188   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:16.571851   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:16.571879   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:16.641278   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:16.641305   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:16.641322   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:16.677773   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:16.677817   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:16.758843   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:16.758874   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:16.796281   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:16.796305   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:16.838337   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:16.838363   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:16.880590   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:16.880623   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:17.042025   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:19.750313   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:19.765372   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:19.765449   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:19.812971   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:19.813010   60028 cri.go:89] found id: ""
	I0915 08:04:19.813020   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:19.813076   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.817556   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:19.817622   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:19.854762   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:19.854784   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:19.854788   60028 cri.go:89] found id: ""
	I0915 08:04:19.854795   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:19.854844   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.859206   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.863218   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:19.863287   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:19.897053   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:19.897084   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:19.897090   60028 cri.go:89] found id: ""
	I0915 08:04:19.897099   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:19.897161   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.901362   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.905265   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:19.905313   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:19.943649   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:19.943671   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:19.943675   60028 cri.go:89] found id: ""
	I0915 08:04:19.943681   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:19.943728   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.947905   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.951755   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:19.951806   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:19.987446   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:19.987473   60028 cri.go:89] found id: ""
	I0915 08:04:19.987481   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:19.987528   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:19.991644   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:19.991702   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:20.027712   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:20.027737   60028 cri.go:89] found id: ""
	I0915 08:04:20.027744   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:20.027789   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:20.031843   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:20.031910   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:20.066525   60028 cri.go:89] found id: ""
	I0915 08:04:20.066553   60028 logs.go:276] 0 containers: []
	W0915 08:04:20.066561   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:20.066567   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:20.066619   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:20.100926   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:20.100950   60028 cri.go:89] found id: ""
	I0915 08:04:20.100958   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:20.101006   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:20.105234   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:20.105256   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:20.145447   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:20.145476   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:20.482259   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:20.482296   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:20.518716   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:20.518749   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:20.562417   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:20.562447   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:20.598760   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:20.598792   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:20.674194   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:20.674233   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:20.709904   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:20.709934   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:20.747861   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:20.747889   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:20.863467   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:20.863502   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:20.929998   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:20.930029   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:20.930045   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:20.981693   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:20.981727   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:21.030743   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:21.030768   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:21.044602   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:21.044625   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:21.081425   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:21.081451   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:20.114027   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:23.621212   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:23.636670   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:23.636738   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:23.672723   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:23.672750   60028 cri.go:89] found id: ""
	I0915 08:04:23.672758   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:23.672803   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.676806   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:23.676872   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:23.712692   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:23.712715   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:23.712721   60028 cri.go:89] found id: ""
	I0915 08:04:23.712729   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:23.712785   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.717053   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.720969   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:23.721026   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:23.763828   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:23.763859   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:23.763864   60028 cri.go:89] found id: ""
	I0915 08:04:23.763872   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:23.763926   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.768210   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.772152   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:23.772204   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:23.805692   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:23.805715   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:23.805720   60028 cri.go:89] found id: ""
	I0915 08:04:23.805728   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:23.805785   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.810157   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.814242   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:23.814297   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:23.848995   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:23.849020   60028 cri.go:89] found id: ""
	I0915 08:04:23.849030   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:23.849090   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.853102   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:23.853161   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:23.886596   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:23.886623   60028 cri.go:89] found id: ""
	I0915 08:04:23.886632   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:23.886698   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.890757   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:23.890823   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:23.924475   60028 cri.go:89] found id: ""
	I0915 08:04:23.924499   60028 logs.go:276] 0 containers: []
	W0915 08:04:23.924506   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:23.924512   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:23.924571   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:23.957734   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:23.957752   60028 cri.go:89] found id: ""
	I0915 08:04:23.957759   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:23.957829   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:23.961787   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:23.961816   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:24.075390   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:24.075427   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:24.112033   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:24.112065   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:24.146934   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:24.146961   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:24.485689   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:24.485739   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:24.500068   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:24.500099   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:24.570470   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:24.570499   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:24.570514   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:24.619640   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:24.619672   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:24.663541   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:24.663576   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:24.746664   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:24.746702   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:24.792763   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:24.792802   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:24.835648   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:24.835679   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:24.873149   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:24.873180   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:24.909447   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:24.909482   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:24.953874   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:24.953904   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:27.495259   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:27.509943   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:27.510015   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:27.544498   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:27.544523   60028 cri.go:89] found id: ""
	I0915 08:04:27.544533   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:27.544593   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.548695   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:27.548758   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:26.194047   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:27.583774   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:27.583795   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:27.583799   60028 cri.go:89] found id: ""
	I0915 08:04:27.583805   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:27.583860   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.588792   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.592598   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:27.592664   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:27.625928   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:27.625953   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:27.625959   60028 cri.go:89] found id: ""
	I0915 08:04:27.625967   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:27.626026   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.630449   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.634667   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:27.634736   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:27.671593   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:27.671615   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:27.671620   60028 cri.go:89] found id: ""
	I0915 08:04:27.671629   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:27.671681   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.676392   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.680137   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:27.680193   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:27.715196   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:27.715220   60028 cri.go:89] found id: ""
	I0915 08:04:27.715229   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:27.715278   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.719326   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:27.719394   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:27.754830   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:27.754855   60028 cri.go:89] found id: ""
	I0915 08:04:27.754863   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:27.754920   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.759052   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:27.759118   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:27.797923   60028 cri.go:89] found id: ""
	I0915 08:04:27.797948   60028 logs.go:276] 0 containers: []
	W0915 08:04:27.798008   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:27.798023   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:27.798086   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:27.832591   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:27.832612   60028 cri.go:89] found id: ""
	I0915 08:04:27.832619   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:27.832665   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:27.836705   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:27.836728   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:27.916624   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:27.916664   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:27.955788   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:27.955827   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:28.074267   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:28.074304   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:28.140428   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:28.140471   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:28.140486   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:28.194730   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:28.194755   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:28.233239   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:28.233275   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:28.275732   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:28.275768   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:28.313379   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:28.313411   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:28.347099   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:28.347126   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:28.675801   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:28.675842   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:28.692354   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:28.692389   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:28.728444   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:28.728475   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:28.772136   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:28.772167   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:28.814142   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:28.814169   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:31.351484   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:31.365157   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:31.365239   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:31.402508   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:31.402536   60028 cri.go:89] found id: ""
	I0915 08:04:31.402545   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:31.402602   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.406672   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:31.406744   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:31.440437   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:31.440466   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:31.440472   60028 cri.go:89] found id: ""
	I0915 08:04:31.440482   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:31.440529   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.444558   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.448125   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:31.448191   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:31.483757   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:31.483777   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:31.483780   60028 cri.go:89] found id: ""
	I0915 08:04:31.483786   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:31.483833   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.487932   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.491834   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:31.491891   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:31.536974   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:31.537003   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:31.537008   60028 cri.go:89] found id: ""
	I0915 08:04:31.537016   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:31.537068   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.541056   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.545156   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:31.545220   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:31.578849   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:31.578870   60028 cri.go:89] found id: ""
	I0915 08:04:31.578876   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:31.578928   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.582918   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:31.582975   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:31.616689   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:31.616710   60028 cri.go:89] found id: ""
	I0915 08:04:31.616718   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:31.616771   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.620854   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:31.620907   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:31.659260   60028 cri.go:89] found id: ""
	I0915 08:04:31.659283   60028 logs.go:276] 0 containers: []
	W0915 08:04:31.659291   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:31.659298   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:31.659351   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:31.696664   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:31.696694   60028 cri.go:89] found id: ""
	I0915 08:04:31.696701   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:31.696809   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:31.700817   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:31.700838   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:31.736280   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:31.736307   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:31.771612   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:31.771645   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:32.113656   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:32.113700   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:32.154957   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:32.154987   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:32.189684   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:32.189711   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:32.265232   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:32.265259   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:32.265275   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:32.313778   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:32.313827   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:32.353894   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:32.353923   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:32.388046   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:32.388072   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:32.436547   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:32.436575   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:32.550803   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:32.550890   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:32.564965   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:32.564993   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:29.266154   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:32.603257   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:32.603292   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:32.640132   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:32.640158   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:35.221397   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:35.235412   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:35.235501   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:35.270735   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:35.270758   60028 cri.go:89] found id: ""
	I0915 08:04:35.270766   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:35.270816   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.274744   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:35.274808   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:35.313160   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:35.313179   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:35.313183   60028 cri.go:89] found id: ""
	I0915 08:04:35.313189   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:35.313233   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.317343   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.320948   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:35.321003   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:35.359818   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:35.359841   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:35.359848   60028 cri.go:89] found id: ""
	I0915 08:04:35.359856   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:35.359920   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.364242   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.368219   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:35.368294   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:35.404632   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:35.404657   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:35.404663   60028 cri.go:89] found id: ""
	I0915 08:04:35.404671   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:35.404734   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.409780   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.413722   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:35.413802   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:35.448482   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:35.448509   60028 cri.go:89] found id: ""
	I0915 08:04:35.448519   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:35.448566   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.452525   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:35.452595   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:35.497087   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:35.497110   60028 cri.go:89] found id: ""
	I0915 08:04:35.497119   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:35.497179   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.501154   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:35.501220   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:35.539546   60028 cri.go:89] found id: ""
	I0915 08:04:35.539573   60028 logs.go:276] 0 containers: []
	W0915 08:04:35.539583   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:35.539590   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:35.539660   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:35.578636   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:35.578663   60028 cri.go:89] found id: ""
	I0915 08:04:35.578672   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:35.578735   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:35.583570   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:35.583596   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:35.630962   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:35.630995   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:35.673169   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:35.673195   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:36.011808   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:36.011843   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:36.025949   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:36.025982   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:36.062092   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:36.062121   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:36.103057   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:36.103085   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:36.144313   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:36.144347   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:36.188176   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:36.188205   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:36.226206   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:36.226238   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:36.264666   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:36.264696   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:36.331469   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:36.331497   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:36.331511   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:36.416459   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:36.416492   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:36.450035   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:36.450064   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:36.484870   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:36.484896   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:35.346058   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:39.107215   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:39.121371   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:39.121450   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:39.157665   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:39.157691   60028 cri.go:89] found id: ""
	I0915 08:04:39.157700   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:39.157749   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.161995   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:39.162065   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:39.200019   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:39.200057   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:39.200063   60028 cri.go:89] found id: ""
	I0915 08:04:39.200073   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:39.200152   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.204414   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.208306   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:39.208370   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:39.243314   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:39.243340   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:39.243346   60028 cri.go:89] found id: ""
	I0915 08:04:39.243365   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:39.243414   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.247782   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.251540   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:39.251604   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:39.286196   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:39.286222   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:39.286228   60028 cri.go:89] found id: ""
	I0915 08:04:39.286236   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:39.286286   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.290424   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.294188   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:39.294257   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:39.330316   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:39.330335   60028 cri.go:89] found id: ""
	I0915 08:04:39.330342   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:39.330392   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.334354   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:39.334413   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:39.370828   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:39.370848   60028 cri.go:89] found id: ""
	I0915 08:04:39.370855   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:39.370897   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.375070   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:39.375120   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:39.412660   60028 cri.go:89] found id: ""
	I0915 08:04:39.412685   60028 logs.go:276] 0 containers: []
	W0915 08:04:39.412693   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:39.412698   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:39.412746   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:39.447435   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:39.447457   60028 cri.go:89] found id: ""
	I0915 08:04:39.447464   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:39.447509   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:39.451479   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:39.451500   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:39.465233   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:39.465258   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:39.499702   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:39.499730   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:39.542389   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:39.542414   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:39.576582   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:39.576607   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:39.640994   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:39.641014   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:39.641024   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:39.718418   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:39.718462   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:39.752829   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:39.752861   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:39.793053   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:39.793087   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:39.842482   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:39.842514   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:39.877311   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:39.877342   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:40.231144   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:40.231187   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:40.353332   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:40.353366   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:40.396073   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:40.396103   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:40.439640   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:40.439668   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:38.418003   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:42.976718   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:42.990606   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:42.990679   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:43.026104   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:43.026133   60028 cri.go:89] found id: ""
	I0915 08:04:43.026143   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:43.026206   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.030203   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:43.030270   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:43.070029   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:43.070062   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:43.070068   60028 cri.go:89] found id: ""
	I0915 08:04:43.070076   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:43.070292   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.074458   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.078122   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:43.078175   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:43.114991   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:43.115011   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:43.115015   60028 cri.go:89] found id: ""
	I0915 08:04:43.115022   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:43.115074   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.119277   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.123238   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:43.123289   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:43.159443   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:43.159464   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:43.159468   60028 cri.go:89] found id: ""
	I0915 08:04:43.159474   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:43.159529   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.163416   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.167132   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:43.167196   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:43.205226   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:43.205248   60028 cri.go:89] found id: ""
	I0915 08:04:43.205257   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:43.205303   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.209442   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:43.209506   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:43.245460   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:43.245483   60028 cri.go:89] found id: ""
	I0915 08:04:43.245491   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:43.245552   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.249755   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:43.249833   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:43.285260   60028 cri.go:89] found id: ""
	I0915 08:04:43.285292   60028 logs.go:276] 0 containers: []
	W0915 08:04:43.285303   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:43.285310   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:43.285376   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:43.322430   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:43.322457   60028 cri.go:89] found id: ""
	I0915 08:04:43.322466   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:43.322526   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:43.326636   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:43.326658   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:43.371003   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:43.371033   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:43.407841   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:43.407874   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:43.443002   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:43.443031   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:43.500019   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:43.500053   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:43.541073   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:43.541112   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:43.620392   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:43.620424   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:43.993924   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:43.993979   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:44.062643   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:44.062674   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:44.062689   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:44.098658   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:44.098692   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:44.140590   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:44.140620   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:44.177416   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:44.177444   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:44.212292   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:44.212316   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:44.251721   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:44.251744   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:44.368502   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:44.368533   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:46.884113   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:46.897790   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:46.897880   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:46.934376   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:46.934410   60028 cri.go:89] found id: ""
	I0915 08:04:46.934421   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:46.934470   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:46.938551   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:46.938623   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:46.974006   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:46.974034   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:46.974040   60028 cri.go:89] found id: ""
	I0915 08:04:46.974048   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:46.974101   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:46.978417   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:46.982296   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:46.982366   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:47.018237   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:47.018262   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:47.018268   60028 cri.go:89] found id: ""
	I0915 08:04:47.018276   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:47.018322   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:47.022350   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:47.026170   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:47.026230   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:47.060957   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:47.060985   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:47.060992   60028 cri.go:89] found id: ""
	I0915 08:04:47.061000   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:47.061060   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:47.065148   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:47.068896   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:47.068948   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:47.102628   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:47.102658   60028 cri.go:89] found id: ""
	I0915 08:04:47.102668   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:47.102723   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:47.106745   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:47.106819   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:47.145452   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:47.145482   60028 cri.go:89] found id: ""
	I0915 08:04:47.145492   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:47.145543   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:47.149552   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:47.149622   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:47.186752   60028 cri.go:89] found id: ""
	I0915 08:04:47.186777   60028 logs.go:276] 0 containers: []
	W0915 08:04:47.186789   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:47.186796   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:47.186863   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:47.224865   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:47.224888   60028 cri.go:89] found id: ""
	I0915 08:04:47.224896   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:47.224948   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:47.229019   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:47.229046   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:47.266668   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:47.266694   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:47.301632   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:47.301664   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:47.429231   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:47.429268   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:47.476549   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:47.476589   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:47.515997   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:47.516026   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:44.498118   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:47.574039   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:47.593700   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:47.593731   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:47.628753   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:47.628783   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:47.664883   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:47.664928   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:48.022071   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:48.022111   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:48.036657   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:48.036695   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:48.075464   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:48.075496   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:48.117119   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:48.117148   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:48.185710   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:48.185735   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:48.185748   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:48.225261   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:48.225290   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:50.760951   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:50.774461   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:50.774519   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:50.818642   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:50.818662   60028 cri.go:89] found id: ""
	I0915 08:04:50.818669   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:50.818713   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:50.823074   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:50.823148   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:50.857532   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:50.857556   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:50.857560   60028 cri.go:89] found id: ""
	I0915 08:04:50.857567   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:50.857613   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:50.862012   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:50.865814   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:50.865873   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:50.901055   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:50.901086   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:50.901093   60028 cri.go:89] found id: ""
	I0915 08:04:50.901112   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:50.901171   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:50.905566   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:50.909439   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:50.909497   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:50.945799   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:50.945848   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:50.945854   60028 cri.go:89] found id: ""
	I0915 08:04:50.945864   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:50.945927   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:50.949972   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:50.953884   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:50.953936   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:51.001231   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:51.001262   60028 cri.go:89] found id: ""
	I0915 08:04:51.001273   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:51.001335   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:51.005615   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:51.005677   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:51.040616   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:51.040646   60028 cri.go:89] found id: ""
	I0915 08:04:51.040656   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:51.040720   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:51.044796   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:51.044854   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:51.079530   60028 cri.go:89] found id: ""
	I0915 08:04:51.079553   60028 logs.go:276] 0 containers: []
	W0915 08:04:51.079565   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:51.079572   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:51.079648   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:51.114857   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:51.114885   60028 cri.go:89] found id: ""
	I0915 08:04:51.114894   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:51.114939   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:51.118955   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:51.118980   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:51.158007   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:51.158046   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:51.490227   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:51.490273   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:51.533914   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:51.533945   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:51.548841   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:51.548869   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:51.600673   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:51.600701   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:51.644677   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:51.644720   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:51.682907   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:51.682933   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:51.720734   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:51.720763   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:51.798170   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:51.798203   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:51.838243   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:51.838273   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:51.955515   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:51.955548   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:52.025218   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:52.025239   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:52.025250   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:52.062870   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:52.062906   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:52.103623   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:52.103647   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:54.640874   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:54.654833   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:54.654909   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:54.689767   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:54.689788   60028 cri.go:89] found id: ""
	I0915 08:04:54.689795   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:54.689861   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.693827   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:54.693902   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:54.732163   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:54.732192   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:54.732197   60028 cri.go:89] found id: ""
	I0915 08:04:54.732204   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:54.732254   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.736257   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.740011   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:54.740057   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:54.775686   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:54.775708   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:54.775712   60028 cri.go:89] found id: ""
	I0915 08:04:54.775718   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:54.775765   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.780693   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.784517   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:54.784568   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:54.818386   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:54.818409   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:54.818412   60028 cri.go:89] found id: ""
	I0915 08:04:54.818421   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:54.818464   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.822627   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.826696   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:54.826759   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:54.864132   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:54.864152   60028 cri.go:89] found id: ""
	I0915 08:04:54.864160   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:54.864212   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.868145   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:54.868201   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:54.904305   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:54.904327   60028 cri.go:89] found id: ""
	I0915 08:04:54.904335   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:54.904384   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.908654   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:54.908720   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:54.941689   60028 cri.go:89] found id: ""
	I0915 08:04:54.941718   60028 logs.go:276] 0 containers: []
	W0915 08:04:54.941726   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:54.941732   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:54.941781   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:54.976543   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:54.976561   60028 cri.go:89] found id: ""
	I0915 08:04:54.976568   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:54.976615   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:54.980723   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:54.980744   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:54.994686   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:54.994709   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:55.045365   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:55.045397   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:55.089928   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:55.089957   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:55.124805   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:55.124829   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:55.242700   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:55.242734   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:55.319579   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:55.319617   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:55.391617   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:55.391648   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:55.391663   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:55.434562   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:55.434592   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:55.468989   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:55.469018   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:55.504127   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:55.504153   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:55.539859   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:55.539887   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:55.575557   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:55.575586   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:55.612459   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:55.612486   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:04:55.962968   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:55.963016   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:53.650054   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:56.722049   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:04:58.519797   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:04:58.533002   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:04:58.533073   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:04:58.566876   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:58.566900   60028 cri.go:89] found id: ""
	I0915 08:04:58.566909   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:04:58.566963   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.570947   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:04:58.571005   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:04:58.607000   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:58.607027   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:58.607034   60028 cri.go:89] found id: ""
	I0915 08:04:58.607042   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:04:58.607101   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.611352   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.615120   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:04:58.615166   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:04:58.657931   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:58.657954   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:58.657958   60028 cri.go:89] found id: ""
	I0915 08:04:58.657966   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:04:58.658011   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.662506   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.671145   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:04:58.671227   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:04:58.706981   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:58.707009   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:58.707014   60028 cri.go:89] found id: ""
	I0915 08:04:58.707023   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:04:58.707090   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.711314   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.715228   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:04:58.715285   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:04:58.750390   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:58.750414   60028 cri.go:89] found id: ""
	I0915 08:04:58.750422   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:04:58.750479   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.754409   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:04:58.754475   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:04:58.787909   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:58.787930   60028 cri.go:89] found id: ""
	I0915 08:04:58.787936   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:04:58.787990   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.791984   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:04:58.792053   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:04:58.831588   60028 cri.go:89] found id: ""
	I0915 08:04:58.831613   60028 logs.go:276] 0 containers: []
	W0915 08:04:58.831623   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:04:58.831629   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:04:58.831690   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:04:58.874667   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:58.874693   60028 cri.go:89] found id: ""
	I0915 08:04:58.874704   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:04:58.874758   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:04:58.879112   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:04:58.879143   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:04:58.928563   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:04:58.928594   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:04:58.972136   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:04:58.972172   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:04:59.014589   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:04:59.014619   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:04:59.050811   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:04:59.050838   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:04:59.085447   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:04:59.085514   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:04:59.120251   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:04:59.120280   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:04:59.135024   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:04:59.135051   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:04:59.261754   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:04:59.261789   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:04:59.329270   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:04:59.329290   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:04:59.329300   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:04:59.365284   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:04:59.365318   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:04:59.400875   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:04:59.400902   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:04:59.439309   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:04:59.439336   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:04:59.519913   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:04:59.519948   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:04:59.556952   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:04:59.556985   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:05:02.407236   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:05:02.425097   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:05:02.425178   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:05:02.461494   60028 cri.go:89] found id: "74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:05:02.461523   60028 cri.go:89] found id: ""
	I0915 08:05:02.461533   60028 logs.go:276] 1 containers: [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0]
	I0915 08:05:02.461590   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.465708   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:05:02.465767   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:05:02.502476   60028 cri.go:89] found id: "316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:05:02.502498   60028 cri.go:89] found id: "9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:05:02.502502   60028 cri.go:89] found id: ""
	I0915 08:05:02.502508   60028 logs.go:276] 2 containers: [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53]
	I0915 08:05:02.502557   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.506852   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.510828   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:05:02.510893   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:05:02.550789   60028 cri.go:89] found id: "8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:05:02.550815   60028 cri.go:89] found id: "018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:05:02.550821   60028 cri.go:89] found id: ""
	I0915 08:05:02.550831   60028 logs.go:276] 2 containers: [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df]
	I0915 08:05:02.550890   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.555355   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.559512   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:05:02.559573   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:05:02.806124   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:02.599200   60028 cri.go:89] found id: "12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:05:02.599226   60028 cri.go:89] found id: "d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:05:02.599230   60028 cri.go:89] found id: ""
	I0915 08:05:02.599237   60028 logs.go:276] 2 containers: [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8]
	I0915 08:05:02.599294   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.603559   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.607785   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:05:02.607861   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:05:02.648055   60028 cri.go:89] found id: "364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:05:02.648085   60028 cri.go:89] found id: ""
	I0915 08:05:02.648094   60028 logs.go:276] 1 containers: [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9]
	I0915 08:05:02.648160   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.653589   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:05:02.653651   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:05:02.688676   60028 cri.go:89] found id: "588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:05:02.688698   60028 cri.go:89] found id: ""
	I0915 08:05:02.688705   60028 logs.go:276] 1 containers: [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630]
	I0915 08:05:02.688752   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.692748   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:05:02.692805   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:05:02.728930   60028 cri.go:89] found id: ""
	I0915 08:05:02.728961   60028 logs.go:276] 0 containers: []
	W0915 08:05:02.728969   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:05:02.728975   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:05:02.729031   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:05:02.767493   60028 cri.go:89] found id: "789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:05:02.767514   60028 cri.go:89] found id: ""
	I0915 08:05:02.767521   60028 logs.go:276] 1 containers: [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529]
	I0915 08:05:02.767568   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:05:02.772083   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:05:02.772111   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:05:02.852119   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:05:02.852169   60028 logs.go:123] Gathering logs for etcd [9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53] ...
	I0915 08:05:02.852180   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e508cb5de320fa3ce73ab68120cf90f0c2c43e89513584d62ff063f608eeb53"
	I0915 08:05:02.891820   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:05:02.891848   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:05:03.229154   60028 logs.go:123] Gathering logs for etcd [316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2] ...
	I0915 08:05:03.229204   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316ac49656cb01c975b8fc18ef787c906e783c8e3765e1719b351d812e28c8f2"
	I0915 08:05:03.269369   60028 logs.go:123] Gathering logs for coredns [018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df] ...
	I0915 08:05:03.269402   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 018172f710b9aa395b3d83b864c8f410e8c76674e8f30d69ff53f6def482a0df"
	I0915 08:05:03.304799   60028 logs.go:123] Gathering logs for kube-scheduler [d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8] ...
	I0915 08:05:03.304828   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89765a7a2a4ec9282bb92a15d3735c5ebc7ff86684ce1598238ea38dda0bfe8"
	I0915 08:05:03.343322   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:05:03.343351   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:05:03.387370   60028 logs.go:123] Gathering logs for storage-provisioner [789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529] ...
	I0915 08:05:03.387399   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789747964afc2df02b5ea0b62f856e67e5d16852ba3cec371b0b75e3a6316529"
	I0915 08:05:03.423379   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:05:03.423407   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:05:03.547305   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:05:03.547342   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:05:03.561729   60028 logs.go:123] Gathering logs for kube-apiserver [74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0] ...
	I0915 08:05:03.561767   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74a1a6690c1c77abc533729e0276a8e793b6b5fb2d58291dba71ebc77067a4e0"
	I0915 08:05:03.614532   60028 logs.go:123] Gathering logs for coredns [8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4] ...
	I0915 08:05:03.614565   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d73aaf01f18f9d75c123df9395385e0d177e2abdd1732e23bf01ea3b93813c4"
	I0915 08:05:03.651931   60028 logs.go:123] Gathering logs for kube-proxy [364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9] ...
	I0915 08:05:03.651966   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 364904ff54c3160dad71ccf0284b0ab609fcd60b602c88cf8c7e12f2ca45aaa9"
	I0915 08:05:03.690248   60028 logs.go:123] Gathering logs for kube-scheduler [12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae] ...
	I0915 08:05:03.690288   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12c99a74c3007a2ad9075d3932dcc9d79598f33c8614057bc884a949ca984dae"
	I0915 08:05:03.779857   60028 logs.go:123] Gathering logs for kube-controller-manager [588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630] ...
	I0915 08:05:03.779902   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 588e43495d51f757ef5ab5a2e350211bfc12172c99af2baf638817c81a522630"
	I0915 08:05:06.322980   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:05:06.336810   60028 kubeadm.go:597] duration metric: took 4m4.311396581s to restartPrimaryControlPlane
	W0915 08:05:06.336883   60028 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0915 08:05:06.336907   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0915 08:05:07.556637   60028 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.219706705s)
	I0915 08:05:07.556717   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 08:05:07.572330   60028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 08:05:07.582570   60028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 08:05:07.592476   60028 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 08:05:07.592499   60028 kubeadm.go:157] found existing configuration files:
	
	I0915 08:05:07.592601   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 08:05:07.601908   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 08:05:07.601976   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 08:05:07.611355   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 08:05:07.620385   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 08:05:07.620439   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 08:05:07.629985   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 08:05:07.639146   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 08:05:07.639214   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 08:05:07.649104   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 08:05:07.658415   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 08:05:07.658534   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 08:05:07.668115   60028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 08:05:07.712686   60028 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 08:05:07.712784   60028 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 08:05:07.833098   60028 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 08:05:07.833257   60028 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 08:05:07.833368   60028 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 08:05:07.842403   60028 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 08:05:05.874139   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:07.844527   60028 out.go:235]   - Generating certificates and keys ...
	I0915 08:05:07.844627   60028 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 08:05:07.844723   60028 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 08:05:07.844831   60028 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 08:05:07.844920   60028 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 08:05:07.845016   60028 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 08:05:07.845090   60028 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 08:05:07.845182   60028 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 08:05:07.845275   60028 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 08:05:07.845387   60028 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 08:05:07.845502   60028 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 08:05:07.845559   60028 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 08:05:07.845636   60028 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 08:05:08.050206   60028 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 08:05:08.175094   60028 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 08:05:08.274100   60028 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 08:05:08.473385   60028 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 08:05:08.557552   60028 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 08:05:08.558194   60028 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 08:05:08.560648   60028 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 08:05:08.562301   60028 out.go:235]   - Booting up control plane ...
	I0915 08:05:08.562401   60028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 08:05:08.563033   60028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 08:05:08.564484   60028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 08:05:08.582833   60028 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 08:05:08.588263   60028 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 08:05:08.588326   60028 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 08:05:08.715921   60028 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 08:05:08.716053   60028 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 08:05:09.217662   60028 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.776061ms
	I0915 08:05:09.217769   60028 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 08:05:11.954165   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:15.026046   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:21.106043   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:24.178082   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:30.258066   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:33.330058   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:39.410021   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:42.482049   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:48.562010   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:51.634099   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:05:57.714060   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:00.786103   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:06.866090   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:09.938129   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:16.018060   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:19.090119   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:25.170074   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:28.242082   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:34.322058   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:37.394107   61251 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.247:22: connect: no route to host
	I0915 08:06:40.397963   61464 start.go:364] duration metric: took 4m9.29028853s to acquireMachinesLock for "embed-certs-474196"
	I0915 08:06:40.398016   61464 start.go:96] Skipping create...Using existing machine configuration
	I0915 08:06:40.398023   61464 fix.go:54] fixHost starting: 
	I0915 08:06:40.398384   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:06:40.398429   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:06:40.414832   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44927
	I0915 08:06:40.415332   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:06:40.415902   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:06:40.415926   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:06:40.416271   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:06:40.416480   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:06:40.416638   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetState
	I0915 08:06:40.418397   61464 fix.go:112] recreateIfNeeded on embed-certs-474196: state=Stopped err=<nil>
	I0915 08:06:40.418428   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	W0915 08:06:40.418581   61464 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 08:06:40.420409   61464 out.go:177] * Restarting existing kvm2 VM for "embed-certs-474196" ...
	I0915 08:06:40.421760   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Start
	I0915 08:06:40.421974   61464 main.go:141] libmachine: (embed-certs-474196) Ensuring networks are active...
	I0915 08:06:40.422800   61464 main.go:141] libmachine: (embed-certs-474196) Ensuring network default is active
	I0915 08:06:40.423191   61464 main.go:141] libmachine: (embed-certs-474196) Ensuring network mk-embed-certs-474196 is active
	I0915 08:06:40.423542   61464 main.go:141] libmachine: (embed-certs-474196) Getting domain xml...
	I0915 08:06:40.424291   61464 main.go:141] libmachine: (embed-certs-474196) Creating domain...
	I0915 08:06:40.395734   61251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 08:06:40.395770   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetMachineName
	I0915 08:06:40.396070   61251 buildroot.go:166] provisioning hostname "no-preload-778087"
	I0915 08:06:40.396093   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetMachineName
	I0915 08:06:40.396252   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:06:40.397836   61251 machine.go:96] duration metric: took 4m37.421997285s to provisionDockerMachine
	I0915 08:06:40.397880   61251 fix.go:56] duration metric: took 4m37.443267865s for fixHost
	I0915 08:06:40.397887   61251 start.go:83] releasing machines lock for "no-preload-778087", held for 4m37.443388398s
	W0915 08:06:40.397915   61251 start.go:714] error starting host: provision: host is not running
	W0915 08:06:40.398007   61251 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0915 08:06:40.398016   61251 start.go:729] Will try again in 5 seconds ...
	I0915 08:06:41.638248   61464 main.go:141] libmachine: (embed-certs-474196) Waiting to get IP...
	I0915 08:06:41.639110   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:41.639553   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:41.639606   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:41.639539   62594 retry.go:31] will retry after 249.448848ms: waiting for machine to come up
	I0915 08:06:41.891116   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:41.891632   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:41.891659   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:41.891581   62594 retry.go:31] will retry after 362.237961ms: waiting for machine to come up
	I0915 08:06:42.255371   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:42.255893   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:42.255920   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:42.255843   62594 retry.go:31] will retry after 430.4758ms: waiting for machine to come up
	I0915 08:06:42.687473   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:42.688017   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:42.688038   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:42.687979   62594 retry.go:31] will retry after 464.944511ms: waiting for machine to come up
	I0915 08:06:43.154819   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:43.155269   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:43.155292   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:43.155217   62594 retry.go:31] will retry after 648.058536ms: waiting for machine to come up
	I0915 08:06:43.805142   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:43.805535   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:43.805553   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:43.805504   62594 retry.go:31] will retry after 782.000025ms: waiting for machine to come up
	I0915 08:06:44.589646   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:44.590125   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:44.590150   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:44.590079   62594 retry.go:31] will retry after 1.043786089s: waiting for machine to come up
	I0915 08:06:45.635802   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:45.636270   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:45.636291   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:45.636223   62594 retry.go:31] will retry after 1.315895073s: waiting for machine to come up
	I0915 08:06:45.399682   61251 start.go:360] acquireMachinesLock for no-preload-778087: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 08:06:46.953604   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:46.954133   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:46.954166   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:46.954090   62594 retry.go:31] will retry after 1.239652071s: waiting for machine to come up
	I0915 08:06:48.194923   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:48.195431   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:48.195465   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:48.195341   62594 retry.go:31] will retry after 1.482142882s: waiting for machine to come up
	I0915 08:06:49.680017   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:49.680612   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:49.680638   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:49.680567   62594 retry.go:31] will retry after 1.847844342s: waiting for machine to come up
	I0915 08:06:51.529752   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:51.530227   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:51.530256   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:51.530166   62594 retry.go:31] will retry after 2.753554415s: waiting for machine to come up
	I0915 08:06:54.286977   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:54.287320   61464 main.go:141] libmachine: (embed-certs-474196) DBG | unable to find current IP address of domain embed-certs-474196 in network mk-embed-certs-474196
	I0915 08:06:54.287347   61464 main.go:141] libmachine: (embed-certs-474196) DBG | I0915 08:06:54.287275   62594 retry.go:31] will retry after 3.668235895s: waiting for machine to come up
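
The retry.go lines above show libmachine polling libvirt for the guest's DHCP-assigned IP, backing off with progressively longer delays until the lease appears. The sketch below is a minimal illustration of that pattern, not minikube's implementation; the lookupIP helper, the starting delay, the growth factor, and the jitter are all assumptions.

package provision

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitForIP polls lookupIP until it returns an address, sleeping a little
// longer (with jitter) after each miss, up to the given deadline.
func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
    start := time.Now()
    delay := 250 * time.Millisecond // starting delay is an assumption
    for time.Since(start) < deadline {
        if ip, err := lookupIP(); err == nil {
            return ip, nil
        }
        jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
        fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
        time.Sleep(jittered)
        delay = delay * 3 / 2 // grow the base delay roughly 1.5x per attempt
    }
    return "", errors.New("timed out waiting for machine IP")
}
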
	I0915 08:06:59.278622   61935 start.go:364] duration metric: took 3m11.26996187s to acquireMachinesLock for "old-k8s-version-368115"
	I0915 08:06:59.278686   61935 start.go:96] Skipping create...Using existing machine configuration
	I0915 08:06:59.278697   61935 fix.go:54] fixHost starting: 
	I0915 08:06:59.279080   61935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:06:59.279134   61935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:06:59.296194   61935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I0915 08:06:59.296678   61935 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:06:59.297138   61935 main.go:141] libmachine: Using API Version  1
	I0915 08:06:59.297161   61935 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:06:59.297508   61935 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:06:59.297665   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:06:59.297821   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetState
	I0915 08:06:59.299228   61935 fix.go:112] recreateIfNeeded on old-k8s-version-368115: state=Stopped err=<nil>
	I0915 08:06:59.299249   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	W0915 08:06:59.299386   61935 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 08:06:59.301649   61935 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-368115" ...
	I0915 08:06:57.958380   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:57.958926   61464 main.go:141] libmachine: (embed-certs-474196) Found IP for machine: 192.168.39.225
	I0915 08:06:57.958947   61464 main.go:141] libmachine: (embed-certs-474196) Reserving static IP address...
	I0915 08:06:57.958961   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has current primary IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:57.959406   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "embed-certs-474196", mac: "52:54:00:d3:f3:e9", ip: "192.168.39.225"} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:57.959430   61464 main.go:141] libmachine: (embed-certs-474196) DBG | skip adding static IP to network mk-embed-certs-474196 - found existing host DHCP lease matching {name: "embed-certs-474196", mac: "52:54:00:d3:f3:e9", ip: "192.168.39.225"}
	I0915 08:06:57.959439   61464 main.go:141] libmachine: (embed-certs-474196) Reserved static IP address: 192.168.39.225
	I0915 08:06:57.959452   61464 main.go:141] libmachine: (embed-certs-474196) Waiting for SSH to be available...
	I0915 08:06:57.959461   61464 main.go:141] libmachine: (embed-certs-474196) DBG | Getting to WaitForSSH function...
	I0915 08:06:57.961639   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:57.961951   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:57.961973   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:57.962106   61464 main.go:141] libmachine: (embed-certs-474196) DBG | Using SSH client type: external
	I0915 08:06:57.962127   61464 main.go:141] libmachine: (embed-certs-474196) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa (-rw-------)
	I0915 08:06:57.962155   61464 main.go:141] libmachine: (embed-certs-474196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 08:06:57.962169   61464 main.go:141] libmachine: (embed-certs-474196) DBG | About to run SSH command:
	I0915 08:06:57.962184   61464 main.go:141] libmachine: (embed-certs-474196) DBG | exit 0
	I0915 08:06:58.089838   61464 main.go:141] libmachine: (embed-certs-474196) DBG | SSH cmd err, output: <nil>: 
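
The WaitForSSH block above probes the guest by invoking the system ssh binary with host-key checking disabled and running `exit 0` until the command succeeds. Below is a hedged sketch of such a probe; the option list mirrors the log, while the helper name, polling interval, and timeout handling are assumptions.

package provision

import (
    "os/exec"
    "time"
)

// sshReady reports whether `ssh ... user@addr exit 0` succeeds within the
// timeout, polling at a fixed interval.
func sshReady(user, addr, keyPath string, timeout time.Duration) bool {
    args := []string{
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "ConnectTimeout=10",
        "-o", "IdentitiesOnly=yes",
        "-i", keyPath,
        user + "@" + addr,
        "exit", "0",
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if err := exec.Command("ssh", args...).Run(); err == nil {
            return true
        }
        time.Sleep(2 * time.Second) // polling interval is an assumption
    }
    return false
}
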
	I0915 08:06:58.090235   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetConfigRaw
	I0915 08:06:58.090845   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetIP
	I0915 08:06:58.093351   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.093711   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.093740   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.094055   61464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/config.json ...
	I0915 08:06:58.094291   61464 machine.go:93] provisionDockerMachine start ...
	I0915 08:06:58.094313   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:06:58.094545   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:58.096947   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.097340   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.097370   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.097556   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:58.097788   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.098028   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.098169   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:58.098335   61464 main.go:141] libmachine: Using SSH client type: native
	I0915 08:06:58.098532   61464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0915 08:06:58.098542   61464 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 08:06:58.206219   61464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0915 08:06:58.206242   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetMachineName
	I0915 08:06:58.206444   61464 buildroot.go:166] provisioning hostname "embed-certs-474196"
	I0915 08:06:58.206467   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetMachineName
	I0915 08:06:58.206668   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:58.209093   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.209408   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.209438   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.209620   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:58.209801   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.209962   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.210087   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:58.210237   61464 main.go:141] libmachine: Using SSH client type: native
	I0915 08:06:58.210449   61464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0915 08:06:58.210465   61464 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-474196 && echo "embed-certs-474196" | sudo tee /etc/hostname
	I0915 08:06:58.331511   61464 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-474196
	
	I0915 08:06:58.331533   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:58.334135   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.334509   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.334534   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.334690   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:58.334849   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.335012   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.335165   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:58.335350   61464 main.go:141] libmachine: Using SSH client type: native
	I0915 08:06:58.335538   61464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0915 08:06:58.335557   61464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-474196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-474196/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-474196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 08:06:58.450485   61464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
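
The two SSH commands above first write the hostname via `sudo tee /etc/hostname` and then patch /etc/hosts so 127.0.1.1 resolves to it. A small sketch of composing that /etc/hosts script from a hostname follows; the shell body is taken from the log, and the function name is an assumption.

package provision

import "fmt"

// hostsFixupScript returns the shell fragment run over SSH in the log:
// if no /etc/hosts entry ends in the hostname, rewrite the 127.0.1.1 line
// or append one. Only the hostname is interpolated.
func hostsFixupScript(hostname string) string {
    return fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, hostname)
}
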
	I0915 08:06:58.450510   61464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 08:06:58.450526   61464 buildroot.go:174] setting up certificates
	I0915 08:06:58.450533   61464 provision.go:84] configureAuth start
	I0915 08:06:58.450541   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetMachineName
	I0915 08:06:58.450848   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetIP
	I0915 08:06:58.453508   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.453856   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.453884   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.454038   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:58.456039   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.456324   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.456346   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.456455   61464 provision.go:143] copyHostCerts
	I0915 08:06:58.456512   61464 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 08:06:58.456525   61464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 08:06:58.456604   61464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 08:06:58.456714   61464 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 08:06:58.456726   61464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 08:06:58.456765   61464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 08:06:58.456844   61464 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 08:06:58.456855   61464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 08:06:58.456890   61464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 08:06:58.456973   61464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.embed-certs-474196 san=[127.0.0.1 192.168.39.225 embed-certs-474196 localhost minikube]
	I0915 08:06:58.641458   61464 provision.go:177] copyRemoteCerts
	I0915 08:06:58.641518   61464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 08:06:58.641544   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:58.644149   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.644498   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.644524   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.644743   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:58.644923   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.645052   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:58.645181   61464 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa Username:docker}
	I0915 08:06:58.727923   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 08:06:58.751722   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0915 08:06:58.775286   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 08:06:58.798000   61464 provision.go:87] duration metric: took 347.456976ms to configureAuth
	I0915 08:06:58.798025   61464 buildroot.go:189] setting minikube options for container-runtime
	I0915 08:06:58.798187   61464 config.go:182] Loaded profile config "embed-certs-474196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 08:06:58.798256   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:58.800703   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.801085   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:58.801121   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:58.801316   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:58.801498   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.801674   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:58.801803   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:58.801964   61464 main.go:141] libmachine: Using SSH client type: native
	I0915 08:06:58.802155   61464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0915 08:06:58.802174   61464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 08:06:59.034498   61464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 08:06:59.034523   61464 machine.go:96] duration metric: took 940.219254ms to provisionDockerMachine
	I0915 08:06:59.034536   61464 start.go:293] postStartSetup for "embed-certs-474196" (driver="kvm2")
	I0915 08:06:59.034550   61464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 08:06:59.034568   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:06:59.034893   61464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 08:06:59.034930   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:59.037787   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.038096   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:59.038117   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.038372   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:59.038538   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:59.038682   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:59.038860   61464 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa Username:docker}
	I0915 08:06:59.124661   61464 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 08:06:59.129186   61464 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 08:06:59.129219   61464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 08:06:59.129297   61464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 08:06:59.129378   61464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 08:06:59.129465   61464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 08:06:59.139077   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 08:06:59.163858   61464 start.go:296] duration metric: took 129.307268ms for postStartSetup
	I0915 08:06:59.163903   61464 fix.go:56] duration metric: took 18.765879797s for fixHost
	I0915 08:06:59.163928   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:59.166915   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.167272   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:59.167292   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.167532   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:59.167693   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:59.167847   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:59.168016   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:59.168170   61464 main.go:141] libmachine: Using SSH client type: native
	I0915 08:06:59.168342   61464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0915 08:06:59.168353   61464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 08:06:59.278480   61464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726387619.252957031
	
	I0915 08:06:59.278501   61464 fix.go:216] guest clock: 1726387619.252957031
	I0915 08:06:59.278509   61464 fix.go:229] Guest: 2024-09-15 08:06:59.252957031 +0000 UTC Remote: 2024-09-15 08:06:59.163908861 +0000 UTC m=+268.195387133 (delta=89.04817ms)
	I0915 08:06:59.278532   61464 fix.go:200] guest clock delta is within tolerance: 89.04817ms
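
The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the machine only when the skew stays inside a tolerance. A minimal sketch of that comparison is below; the helper name and the parsing shortcut (float seconds) are assumptions.

package provision

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
// the absolute guest/host skew is within the tolerance.
func clockDeltaOK(guestDate string, tolerance time.Duration) (time.Duration, bool, error) {
    secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
    if err != nil {
        return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestDate, err)
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    return delta, delta <= tolerance, nil
}
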
	I0915 08:06:59.278539   61464 start.go:83] releasing machines lock for "embed-certs-474196", held for 18.880539846s
	I0915 08:06:59.278568   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:06:59.278841   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetIP
	I0915 08:06:59.281460   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.281756   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:59.281776   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.281981   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:06:59.282517   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:06:59.282682   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:06:59.282802   61464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 08:06:59.282844   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:59.282886   61464 ssh_runner.go:195] Run: cat /version.json
	I0915 08:06:59.282908   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:06:59.285494   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.285520   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.285935   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:59.285969   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:06:59.285992   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.286076   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:06:59.286122   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:59.286269   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:59.286333   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:06:59.286488   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:06:59.286536   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:59.286612   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:06:59.286625   61464 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa Username:docker}
	I0915 08:06:59.286738   61464 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa Username:docker}
	I0915 08:06:59.394636   61464 ssh_runner.go:195] Run: systemctl --version
	I0915 08:06:59.400613   61464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 08:06:59.546838   61464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 08:06:59.554218   61464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 08:06:59.554283   61464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 08:06:59.577063   61464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 08:06:59.577087   61464 start.go:495] detecting cgroup driver to use...
	I0915 08:06:59.577146   61464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 08:06:59.594797   61464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 08:06:59.609384   61464 docker.go:217] disabling cri-docker service (if available) ...
	I0915 08:06:59.609446   61464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 08:06:59.623233   61464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 08:06:59.637899   61464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 08:06:59.756311   61464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 08:06:59.889293   61464 docker.go:233] disabling docker service ...
	I0915 08:06:59.889376   61464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 08:06:59.904185   61464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 08:06:59.917854   61464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 08:07:00.061286   61464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 08:07:00.200729   61464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 08:07:00.215264   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 08:07:00.233942   61464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 08:07:00.234003   61464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:00.246070   61464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 08:07:00.246124   61464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:00.258686   61464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:00.270806   61464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:00.282664   61464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 08:07:00.294497   61464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:00.305922   61464 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:00.325368   61464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:00.338253   61464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 08:07:00.349887   61464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 08:07:00.349946   61464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 08:07:00.366435   61464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 08:07:00.377381   61464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:07:00.523208   61464 ssh_runner.go:195] Run: sudo systemctl restart crio
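
The run of sed/sysctl commands above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter when the bridge sysctl is missing, enables IPv4 forwarding, and restarts CRI-O. The abridged sketch below strings a few of those steps together; the `run` callback stands in for minikube's SSH runner and is an assumption, and several of the logged sed edits are omitted for brevity.

package provision

// configureCrio applies a subset of the configuration steps seen in the log
// and restarts the service. run executes a shell command on the guest.
func configureCrio(run func(cmd string) error, pauseImage string) error {
    steps := []string{
        `sudo sed -i 's|^.*pause_image = .*$|pause_image = "` + pauseImage + `"|' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
    }
    for _, s := range steps {
        if err := run(s); err != nil {
            return err
        }
    }
    // The bridge sysctl may be absent until br_netfilter is loaded.
    if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
        if err := run("sudo modprobe br_netfilter"); err != nil {
            return err
        }
    }
    if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
        return err
    }
    return run("sudo systemctl daemon-reload && sudo systemctl restart crio")
}
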
	I0915 08:07:00.621354   61464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 08:07:00.621428   61464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 08:07:00.626304   61464 start.go:563] Will wait 60s for crictl version
	I0915 08:07:00.626361   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:07:00.630274   61464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 08:07:00.673176   61464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 08:07:00.673269   61464 ssh_runner.go:195] Run: crio --version
	I0915 08:07:00.702303   61464 ssh_runner.go:195] Run: crio --version
	I0915 08:07:00.734411   61464 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 08:07:00.735560   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetIP
	I0915 08:07:00.738622   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:00.738994   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:07:00.739038   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:00.739207   61464 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0915 08:07:00.743734   61464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 08:07:00.756858   61464 kubeadm.go:883] updating cluster {Name:embed-certs-474196 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-474196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 08:07:00.756966   61464 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 08:07:00.757009   61464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 08:07:00.792598   61464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0915 08:07:00.792677   61464 ssh_runner.go:195] Run: which lz4
	I0915 08:07:00.797643   61464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 08:07:00.802997   61464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 08:07:00.803022   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0915 08:06:59.302972   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .Start
	I0915 08:06:59.303142   61935 main.go:141] libmachine: (old-k8s-version-368115) Ensuring networks are active...
	I0915 08:06:59.303877   61935 main.go:141] libmachine: (old-k8s-version-368115) Ensuring network default is active
	I0915 08:06:59.304443   61935 main.go:141] libmachine: (old-k8s-version-368115) Ensuring network mk-old-k8s-version-368115 is active
	I0915 08:06:59.304953   61935 main.go:141] libmachine: (old-k8s-version-368115) Getting domain xml...
	I0915 08:06:59.305925   61935 main.go:141] libmachine: (old-k8s-version-368115) Creating domain...
	I0915 08:07:00.579954   61935 main.go:141] libmachine: (old-k8s-version-368115) Waiting to get IP...
	I0915 08:07:00.580808   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:00.581231   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:00.581321   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:00.581220   62711 retry.go:31] will retry after 281.805985ms: waiting for machine to come up
	I0915 08:07:00.864727   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:00.865307   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:00.865338   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:00.865243   62711 retry.go:31] will retry after 376.239086ms: waiting for machine to come up
	I0915 08:07:01.243692   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:01.244252   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:01.244280   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:01.244205   62711 retry.go:31] will retry after 420.954124ms: waiting for machine to come up
	I0915 08:07:01.666970   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:01.667618   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:01.667647   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:01.667578   62711 retry.go:31] will retry after 569.803195ms: waiting for machine to come up
	I0915 08:07:02.239561   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:02.240111   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:02.240138   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:02.240063   62711 retry.go:31] will retry after 664.667131ms: waiting for machine to come up
	I0915 08:07:02.226825   61464 crio.go:462] duration metric: took 1.429202365s to copy over tarball
	I0915 08:07:02.226956   61464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 08:07:04.305126   61464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.078132748s)
	I0915 08:07:04.305156   61464 crio.go:469] duration metric: took 2.078291313s to extract the tarball
	I0915 08:07:04.305165   61464 ssh_runner.go:146] rm: /preloaded.tar.lz4
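
Above, the runner checks for /preloaded.tar.lz4 with `stat`, copies the preload tarball over when it is missing, unpacks it into /var with lz4 decompression and xattrs preserved, and removes the archive. A compact sketch of the unpack-and-cleanup step follows; the `run` callback again stands in for the SSH runner and is an assumption.

package provision

// extractPreload unpacks a preloaded image tarball into /var the way the
// logged tar invocation does, then deletes the archive.
func extractPreload(run func(cmd string) error, tarball string) error {
    cmd := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball
    if err := run(cmd); err != nil {
        return err
    }
    return run("sudo rm -f " + tarball)
}
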
	I0915 08:07:04.343576   61464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 08:07:04.384542   61464 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 08:07:04.384562   61464 cache_images.go:84] Images are preloaded, skipping loading
	I0915 08:07:04.384571   61464 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.1 crio true true} ...
	I0915 08:07:04.384687   61464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-474196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-474196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 08:07:04.384769   61464 ssh_runner.go:195] Run: crio config
	I0915 08:07:04.430201   61464 cni.go:84] Creating CNI manager for ""
	I0915 08:07:04.430223   61464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 08:07:04.430236   61464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 08:07:04.430260   61464 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-474196 NodeName:embed-certs-474196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 08:07:04.430395   61464 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-474196"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
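
The kubeadm, kubelet, and kube-proxy stanzas above are generated from the cluster's parameters (advertise address, API server port, CRI socket, cgroup driver, pod and service CIDRs). The sketch below shows one way to render such a config with text/template; the template covers only the InitConfiguration stanza and, together with the parameter struct, is an illustrative assumption rather than minikube's actual template.

package provision

import (
    "bytes"
    "text/template"
)

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type initConfigParams struct {
    NodeIP        string
    APIServerPort int
    NodeName      string
}

// renderInitConfig fills the template with the node's parameters.
func renderInitConfig(p initConfigParams) (string, error) {
    t, err := template.New("kubeadm").Parse(initConfigTmpl)
    if err != nil {
        return "", err
    }
    var buf bytes.Buffer
    if err := t.Execute(&buf, p); err != nil {
        return "", err
    }
    return buf.String(), nil
}
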
	
	I0915 08:07:04.430457   61464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 08:07:04.440680   61464 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 08:07:04.440735   61464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 08:07:04.453881   61464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0915 08:07:04.473299   61464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 08:07:04.489604   61464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0915 08:07:04.506497   61464 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I0915 08:07:04.510322   61464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 08:07:04.523000   61464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:07:04.661284   61464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 08:07:04.679440   61464 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196 for IP: 192.168.39.225
	I0915 08:07:04.679483   61464 certs.go:194] generating shared ca certs ...
	I0915 08:07:04.679503   61464 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:07:04.679698   61464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 08:07:04.679755   61464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 08:07:04.679767   61464 certs.go:256] generating profile certs ...
	I0915 08:07:04.679887   61464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/client.key
	I0915 08:07:04.679964   61464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/apiserver.key.caecef90
	I0915 08:07:04.680027   61464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/proxy-client.key
	I0915 08:07:04.680210   61464 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 08:07:04.680255   61464 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 08:07:04.680267   61464 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 08:07:04.680296   61464 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 08:07:04.680330   61464 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 08:07:04.680370   61464 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 08:07:04.680418   61464 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 08:07:04.681152   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 08:07:04.729228   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 08:07:04.770172   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 08:07:04.810845   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 08:07:04.840324   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0915 08:07:04.868871   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 08:07:04.893273   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 08:07:04.917134   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/embed-certs-474196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 08:07:04.941773   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 08:07:04.966265   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 08:07:04.989109   61464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 08:07:05.012359   61464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 08:07:05.029120   61464 ssh_runner.go:195] Run: openssl version
	I0915 08:07:05.035033   61464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 08:07:05.047774   61464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 08:07:05.052259   61464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 08:07:05.052328   61464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 08:07:05.058293   61464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 08:07:05.069436   61464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 08:07:05.081691   61464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:05.086488   61464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:05.086544   61464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:05.092325   61464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 08:07:05.104770   61464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 08:07:05.116142   61464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 08:07:05.120748   61464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 08:07:05.120813   61464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 08:07:05.126838   61464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 08:07:05.139388   61464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 08:07:05.143992   61464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 08:07:05.149970   61464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 08:07:05.156326   61464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 08:07:05.162714   61464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 08:07:05.168845   61464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 08:07:05.175358   61464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0915 08:07:05.181438   61464 kubeadm.go:392] StartCluster: {Name:embed-certs-474196 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-474196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 08:07:05.181533   61464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 08:07:05.181605   61464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 08:07:05.223085   61464 cri.go:89] found id: ""
	I0915 08:07:05.223161   61464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 08:07:05.233727   61464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 08:07:05.233750   61464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 08:07:05.233798   61464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 08:07:05.243790   61464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 08:07:05.244796   61464 kubeconfig.go:125] found "embed-certs-474196" server: "https://192.168.39.225:8443"
	I0915 08:07:05.246748   61464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 08:07:05.256745   61464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.225
	I0915 08:07:05.256780   61464 kubeadm.go:1160] stopping kube-system containers ...
	I0915 08:07:05.256794   61464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0915 08:07:05.256850   61464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 08:07:05.297440   61464 cri.go:89] found id: ""
	I0915 08:07:05.297540   61464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0915 08:07:05.314894   61464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 08:07:05.324864   61464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 08:07:05.324886   61464 kubeadm.go:157] found existing configuration files:
	
	I0915 08:07:05.324938   61464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 08:07:05.334453   61464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 08:07:05.334514   61464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 08:07:05.344470   61464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 08:07:05.354598   61464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 08:07:05.354652   61464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 08:07:05.365258   61464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 08:07:05.375426   61464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 08:07:05.375489   61464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 08:07:05.385819   61464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 08:07:05.395699   61464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 08:07:05.395763   61464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 08:07:05.407401   61464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 08:07:05.417429   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:05.532846   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:02.906033   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:02.906590   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:02.906617   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:02.906542   62711 retry.go:31] will retry after 633.069911ms: waiting for machine to come up
	I0915 08:07:03.540920   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:03.541465   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:03.541494   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:03.541406   62711 retry.go:31] will retry after 1.099897025s: waiting for machine to come up
	I0915 08:07:04.643083   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:04.643576   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:04.643612   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:04.643539   62711 retry.go:31] will retry after 932.444579ms: waiting for machine to come up
	I0915 08:07:05.577636   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:05.578023   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:05.578060   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:05.578003   62711 retry.go:31] will retry after 1.694786967s: waiting for machine to come up
	I0915 08:07:07.274799   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:07.275362   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:07.275391   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:07.275296   62711 retry.go:31] will retry after 2.325006691s: waiting for machine to come up
	I0915 08:07:06.573303   61464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.040426313s)
	I0915 08:07:06.573330   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:06.790611   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:06.862370   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:06.960025   61464 api_server.go:52] waiting for apiserver process to appear ...
	I0915 08:07:06.960120   61464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:07.461108   61464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:07.960542   61464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:08.461062   61464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:08.960376   61464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:08.985061   61464 api_server.go:72] duration metric: took 2.025034828s to wait for apiserver process to appear ...
	I0915 08:07:08.985090   61464 api_server.go:88] waiting for apiserver healthz status ...
	I0915 08:07:08.985112   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:07:08.985630   61464 api_server.go:269] stopped: https://192.168.39.225:8443/healthz: Get "https://192.168.39.225:8443/healthz": dial tcp 192.168.39.225:8443: connect: connection refused
	I0915 08:07:09.485281   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:07:09.602452   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:09.602932   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:09.602970   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:09.602899   62711 retry.go:31] will retry after 2.419106475s: waiting for machine to come up
	I0915 08:07:12.025665   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:12.026197   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:12.026226   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:12.026140   62711 retry.go:31] will retry after 3.126716687s: waiting for machine to come up
	I0915 08:07:11.956515   61464 api_server.go:279] https://192.168.39.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 08:07:11.956577   61464 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 08:07:11.956594   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:07:12.007036   61464 api_server.go:279] https://192.168.39.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 08:07:12.007075   61464 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 08:07:12.007090   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:07:12.020989   61464 api_server.go:279] https://192.168.39.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 08:07:12.021017   61464 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 08:07:12.485550   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:07:12.489650   61464 api_server.go:279] https://192.168.39.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 08:07:12.489678   61464 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 08:07:12.985318   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:07:12.994476   61464 api_server.go:279] https://192.168.39.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 08:07:12.994500   61464 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 08:07:13.486146   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:07:13.493304   61464 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0915 08:07:13.502394   61464 api_server.go:141] control plane version: v1.31.1
	I0915 08:07:13.502424   61464 api_server.go:131] duration metric: took 4.517328072s to wait for apiserver health ...
	I0915 08:07:13.502433   61464 cni.go:84] Creating CNI manager for ""
	I0915 08:07:13.502438   61464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 08:07:13.504259   61464 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 08:07:13.505677   61464 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 08:07:13.533987   61464 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0915 08:07:13.564703   61464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 08:07:13.581188   61464 system_pods.go:59] 8 kube-system pods found
	I0915 08:07:13.581223   61464 system_pods.go:61] "coredns-7c65d6cfc9-np76n" [a54ae610-21a2-491a-84b7-13fdd31ad5a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 08:07:13.581231   61464 system_pods.go:61] "etcd-embed-certs-474196" [dd0695b8-d16b-4f34-adae-3c284f2ea135] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0915 08:07:13.581266   61464 system_pods.go:61] "kube-apiserver-embed-certs-474196" [319b041a-0bde-442e-8726-10164c01f732] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 08:07:13.581275   61464 system_pods.go:61] "kube-controller-manager-embed-certs-474196" [ca3e38d2-bb63-480c-b085-a89670340402] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 08:07:13.581287   61464 system_pods.go:61] "kube-proxy-5tmwl" [fdcd8093-0379-45b1-b02e-a4f61444848c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0915 08:07:13.581294   61464 system_pods.go:61] "kube-scheduler-embed-certs-474196" [04adc8e2-296f-40f9-bdbd-6e7ad416ce32] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0915 08:07:13.581299   61464 system_pods.go:61] "metrics-server-6867b74b74-mh8xh" [8e97a269-63e1-4fb0-b8b7-192535e25af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 08:07:13.581309   61464 system_pods.go:61] "storage-provisioner" [baf93e99-ee90-4247-85c2-3ebb2324795d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0915 08:07:13.581316   61464 system_pods.go:74] duration metric: took 16.595233ms to wait for pod list to return data ...
	I0915 08:07:13.581325   61464 node_conditions.go:102] verifying NodePressure condition ...
	I0915 08:07:13.587843   61464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 08:07:13.587865   61464 node_conditions.go:123] node cpu capacity is 2
	I0915 08:07:13.587876   61464 node_conditions.go:105] duration metric: took 6.546525ms to run NodePressure ...
	I0915 08:07:13.587889   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:13.873694   61464 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0915 08:07:13.877961   61464 kubeadm.go:739] kubelet initialised
	I0915 08:07:13.877981   61464 kubeadm.go:740] duration metric: took 4.263528ms waiting for restarted kubelet to initialise ...
	I0915 08:07:13.877988   61464 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 08:07:13.883769   61464 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-np76n" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:13.889956   61464 pod_ready.go:98] node "embed-certs-474196" hosting pod "coredns-7c65d6cfc9-np76n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.889982   61464 pod_ready.go:82] duration metric: took 6.184369ms for pod "coredns-7c65d6cfc9-np76n" in "kube-system" namespace to be "Ready" ...
	E0915 08:07:13.889991   61464 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-474196" hosting pod "coredns-7c65d6cfc9-np76n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.889997   61464 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:13.895239   61464 pod_ready.go:98] node "embed-certs-474196" hosting pod "etcd-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.895254   61464 pod_ready.go:82] duration metric: took 5.250475ms for pod "etcd-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	E0915 08:07:13.895261   61464 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-474196" hosting pod "etcd-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.895266   61464 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:13.901477   61464 pod_ready.go:98] node "embed-certs-474196" hosting pod "kube-apiserver-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.901501   61464 pod_ready.go:82] duration metric: took 6.226625ms for pod "kube-apiserver-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	E0915 08:07:13.901510   61464 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-474196" hosting pod "kube-apiserver-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.901520   61464 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:13.968581   61464 pod_ready.go:98] node "embed-certs-474196" hosting pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.968611   61464 pod_ready.go:82] duration metric: took 67.080385ms for pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	E0915 08:07:13.968623   61464 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-474196" hosting pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:13.968632   61464 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5tmwl" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:14.368965   61464 pod_ready.go:98] node "embed-certs-474196" hosting pod "kube-proxy-5tmwl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:14.368991   61464 pod_ready.go:82] duration metric: took 400.350313ms for pod "kube-proxy-5tmwl" in "kube-system" namespace to be "Ready" ...
	E0915 08:07:14.368999   61464 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-474196" hosting pod "kube-proxy-5tmwl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:14.369006   61464 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:14.769339   61464 pod_ready.go:98] node "embed-certs-474196" hosting pod "kube-scheduler-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:14.769375   61464 pod_ready.go:82] duration metric: took 400.36229ms for pod "kube-scheduler-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	E0915 08:07:14.769385   61464 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-474196" hosting pod "kube-scheduler-embed-certs-474196" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:14.769391   61464 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:15.168711   61464 pod_ready.go:98] node "embed-certs-474196" hosting pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:15.168739   61464 pod_ready.go:82] duration metric: took 399.340377ms for pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace to be "Ready" ...
	E0915 08:07:15.168749   61464 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-474196" hosting pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:15.168758   61464 pod_ready.go:39] duration metric: took 1.29076089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 08:07:15.168772   61464 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 08:07:15.180980   61464 ops.go:34] apiserver oom_adj: -16
	I0915 08:07:15.181006   61464 kubeadm.go:597] duration metric: took 9.947249544s to restartPrimaryControlPlane
	I0915 08:07:15.181019   61464 kubeadm.go:394] duration metric: took 9.999588513s to StartCluster
	I0915 08:07:15.181039   61464 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:07:15.181127   61464 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 08:07:15.183532   61464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:07:15.183807   61464 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 08:07:15.183888   61464 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 08:07:15.183966   61464 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-474196"
	I0915 08:07:15.183983   61464 config.go:182] Loaded profile config "embed-certs-474196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 08:07:15.183990   61464 addons.go:69] Setting default-storageclass=true in profile "embed-certs-474196"
	I0915 08:07:15.183995   61464 addons.go:69] Setting metrics-server=true in profile "embed-certs-474196"
	I0915 08:07:15.183987   61464 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-474196"
	W0915 08:07:15.184033   61464 addons.go:243] addon storage-provisioner should already be in state true
	I0915 08:07:15.184074   61464 host.go:66] Checking if "embed-certs-474196" exists ...
	I0915 08:07:15.184012   61464 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-474196"
	I0915 08:07:15.184015   61464 addons.go:234] Setting addon metrics-server=true in "embed-certs-474196"
	W0915 08:07:15.184141   61464 addons.go:243] addon metrics-server should already be in state true
	I0915 08:07:15.184171   61464 host.go:66] Checking if "embed-certs-474196" exists ...
	I0915 08:07:15.184504   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:15.184529   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:15.184555   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:15.184569   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:15.184599   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:15.184630   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:15.186534   61464 out.go:177] * Verifying Kubernetes components...
	I0915 08:07:15.187875   61464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:07:15.199436   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0915 08:07:15.199908   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:15.200451   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:07:15.200518   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:15.200907   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:15.201529   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:15.201578   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:15.202179   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I0915 08:07:15.202194   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0915 08:07:15.202503   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:15.202622   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:15.202904   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:07:15.202917   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:15.203002   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:07:15.203010   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:15.203191   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:15.203275   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:15.203313   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetState
	I0915 08:07:15.203721   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:15.203748   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:15.205939   61464 addons.go:234] Setting addon default-storageclass=true in "embed-certs-474196"
	W0915 08:07:15.205954   61464 addons.go:243] addon default-storageclass should already be in state true
	I0915 08:07:15.205976   61464 host.go:66] Checking if "embed-certs-474196" exists ...
	I0915 08:07:15.206222   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:15.206253   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:15.218563   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33007
	I0915 08:07:15.218985   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:15.219544   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:07:15.219567   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:15.219880   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:15.220071   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetState
	I0915 08:07:15.221781   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:07:15.223050   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0915 08:07:15.223477   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:15.223872   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:07:15.223891   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:15.223873   61464 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:15.224242   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:15.224642   61464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:15.224668   61464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:15.225362   61464 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 08:07:15.225381   61464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 08:07:15.225398   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:07:15.225893   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0915 08:07:15.226279   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:15.226794   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:07:15.226816   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:15.227185   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:15.227391   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetState
	I0915 08:07:15.228835   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:15.229092   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:07:15.229124   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:15.229476   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:07:15.229493   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:07:15.229624   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:07:15.229873   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:07:15.230012   61464 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa Username:docker}
	I0915 08:07:15.231337   61464 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0915 08:07:15.232767   61464 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 08:07:15.232786   61464 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 08:07:15.232804   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:07:15.235494   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:15.235812   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:07:15.235838   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:15.236053   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:07:15.236211   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:07:15.236357   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:07:15.236470   61464 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa Username:docker}
	I0915 08:07:15.241949   61464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0915 08:07:15.242396   61464 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:15.242886   61464 main.go:141] libmachine: Using API Version  1
	I0915 08:07:15.242906   61464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:15.243257   61464 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:15.243410   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetState
	I0915 08:07:15.245022   61464 main.go:141] libmachine: (embed-certs-474196) Calling .DriverName
	I0915 08:07:15.245212   61464 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 08:07:15.245225   61464 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 08:07:15.245448   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHHostname
	I0915 08:07:15.248466   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:15.249009   61464 main.go:141] libmachine: (embed-certs-474196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f3:e9", ip: ""} in network mk-embed-certs-474196: {Iface:virbr1 ExpiryTime:2024-09-15 09:06:51 +0000 UTC Type:0 Mac:52:54:00:d3:f3:e9 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:embed-certs-474196 Clientid:01:52:54:00:d3:f3:e9}
	I0915 08:07:15.249028   61464 main.go:141] libmachine: (embed-certs-474196) DBG | domain embed-certs-474196 has defined IP address 192.168.39.225 and MAC address 52:54:00:d3:f3:e9 in network mk-embed-certs-474196
	I0915 08:07:15.249057   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHPort
	I0915 08:07:15.249196   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHKeyPath
	I0915 08:07:15.249295   61464 main.go:141] libmachine: (embed-certs-474196) Calling .GetSSHUsername
	I0915 08:07:15.249415   61464 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/embed-certs-474196/id_rsa Username:docker}
	I0915 08:07:15.420264   61464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 08:07:15.437759   61464 node_ready.go:35] waiting up to 6m0s for node "embed-certs-474196" to be "Ready" ...
	I0915 08:07:15.615701   61464 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 08:07:15.615720   61464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0915 08:07:15.618372   61464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 08:07:15.647145   61464 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 08:07:15.647178   61464 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 08:07:15.662780   61464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 08:07:15.706746   61464 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 08:07:15.706773   61464 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 08:07:15.770729   61464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 08:07:16.583862   61464 main.go:141] libmachine: Making call to close driver server
	I0915 08:07:16.583890   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Close
	I0915 08:07:16.583915   61464 main.go:141] libmachine: Making call to close driver server
	I0915 08:07:16.583939   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Close
	I0915 08:07:16.584257   61464 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:07:16.584264   61464 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:07:16.584276   61464 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:07:16.584279   61464 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:07:16.584287   61464 main.go:141] libmachine: Making call to close driver server
	I0915 08:07:16.584289   61464 main.go:141] libmachine: Making call to close driver server
	I0915 08:07:16.584295   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Close
	I0915 08:07:16.584312   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Close
	I0915 08:07:16.585854   61464 main.go:141] libmachine: (embed-certs-474196) DBG | Closing plugin on server side
	I0915 08:07:16.585878   61464 main.go:141] libmachine: (embed-certs-474196) DBG | Closing plugin on server side
	I0915 08:07:16.585882   61464 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:07:16.585918   61464 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:07:16.585930   61464 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:07:16.585920   61464 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:07:16.590947   61464 main.go:141] libmachine: Making call to close driver server
	I0915 08:07:16.590964   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Close
	I0915 08:07:16.591205   61464 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:07:16.591224   61464 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:07:16.672923   61464 main.go:141] libmachine: Making call to close driver server
	I0915 08:07:16.672950   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Close
	I0915 08:07:16.673251   61464 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:07:16.673272   61464 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:07:16.673282   61464 main.go:141] libmachine: Making call to close driver server
	I0915 08:07:16.673295   61464 main.go:141] libmachine: (embed-certs-474196) Calling .Close
	I0915 08:07:16.673503   61464 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:07:16.673514   61464 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:07:16.673525   61464 addons.go:475] Verifying addon metrics-server=true in "embed-certs-474196"
	I0915 08:07:16.676904   61464 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0915 08:07:15.154176   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:15.154684   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | unable to find current IP address of domain old-k8s-version-368115 in network mk-old-k8s-version-368115
	I0915 08:07:15.154731   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | I0915 08:07:15.154656   62711 retry.go:31] will retry after 4.316194548s: waiting for machine to come up
	I0915 08:07:20.835325   61251 start.go:364] duration metric: took 35.435596758s to acquireMachinesLock for "no-preload-778087"
	I0915 08:07:20.835377   61251 start.go:96] Skipping create...Using existing machine configuration
	I0915 08:07:20.835389   61251 fix.go:54] fixHost starting: 
	I0915 08:07:20.835824   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:07:20.835860   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:07:20.854242   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42875
	I0915 08:07:20.854631   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:07:20.855142   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:07:20.855170   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:07:20.855537   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:07:20.855760   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:07:20.855912   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetState
	I0915 08:07:20.857601   61251 fix.go:112] recreateIfNeeded on no-preload-778087: state=Stopped err=<nil>
	I0915 08:07:20.857630   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	W0915 08:07:20.857788   61251 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 08:07:20.860368   61251 out.go:177] * Restarting existing kvm2 VM for "no-preload-778087" ...
	I0915 08:07:16.678457   61464 addons.go:510] duration metric: took 1.494573408s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0915 08:07:17.442716   61464 node_ready.go:53] node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:19.941537   61464 node_ready.go:53] node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:19.472536   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.473042   61935 main.go:141] libmachine: (old-k8s-version-368115) Found IP for machine: 192.168.50.132
	I0915 08:07:19.473070   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has current primary IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.473076   61935 main.go:141] libmachine: (old-k8s-version-368115) Reserving static IP address...
	I0915 08:07:19.473521   61935 main.go:141] libmachine: (old-k8s-version-368115) Reserved static IP address: 192.168.50.132
	I0915 08:07:19.473543   61935 main.go:141] libmachine: (old-k8s-version-368115) Waiting for SSH to be available...
	I0915 08:07:19.473564   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "old-k8s-version-368115", mac: "52:54:00:40:70:dc", ip: "192.168.50.132"} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.473588   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | skip adding static IP to network mk-old-k8s-version-368115 - found existing host DHCP lease matching {name: "old-k8s-version-368115", mac: "52:54:00:40:70:dc", ip: "192.168.50.132"}
	I0915 08:07:19.473604   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | Getting to WaitForSSH function...
	I0915 08:07:19.475977   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.476416   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.476452   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.476621   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | Using SSH client type: external
	I0915 08:07:19.476649   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/old-k8s-version-368115/id_rsa (-rw-------)
	I0915 08:07:19.476692   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/old-k8s-version-368115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 08:07:19.476708   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | About to run SSH command:
	I0915 08:07:19.476726   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | exit 0
	I0915 08:07:19.598086   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | SSH cmd err, output: <nil>: 
	I0915 08:07:19.598577   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetConfigRaw
	I0915 08:07:19.599320   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetIP
	I0915 08:07:19.601804   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.602147   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.602183   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.602365   61935 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/config.json ...
	I0915 08:07:19.602553   61935 machine.go:93] provisionDockerMachine start ...
	I0915 08:07:19.602570   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:07:19.602764   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:19.604788   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.605113   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.605140   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.605292   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:19.605448   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:19.605634   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:19.605732   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:19.605906   61935 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:19.606088   61935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0915 08:07:19.606099   61935 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 08:07:19.706170   61935 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0915 08:07:19.706204   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetMachineName
	I0915 08:07:19.706450   61935 buildroot.go:166] provisioning hostname "old-k8s-version-368115"
	I0915 08:07:19.706479   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetMachineName
	I0915 08:07:19.706628   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:19.709185   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.709530   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.709557   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.709750   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:19.709929   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:19.710077   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:19.710201   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:19.710357   61935 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:19.710576   61935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0915 08:07:19.710592   61935 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-368115 && echo "old-k8s-version-368115" | sudo tee /etc/hostname
	I0915 08:07:19.824991   61935 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-368115
	
	I0915 08:07:19.825015   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:19.827714   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.828038   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.828065   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.828183   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:19.828377   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:19.828496   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:19.828657   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:19.828847   61935 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:19.829045   61935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0915 08:07:19.829067   61935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-368115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-368115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-368115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 08:07:19.938588   61935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 08:07:19.938611   61935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 08:07:19.938650   61935 buildroot.go:174] setting up certificates
	I0915 08:07:19.938659   61935 provision.go:84] configureAuth start
	I0915 08:07:19.938667   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetMachineName
	I0915 08:07:19.938900   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetIP
	I0915 08:07:19.942082   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.942458   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.942486   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.942688   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:19.944834   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.945116   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:19.945139   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:19.945222   61935 provision.go:143] copyHostCerts
	I0915 08:07:19.945281   61935 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 08:07:19.945290   61935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 08:07:19.945351   61935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 08:07:19.945485   61935 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 08:07:19.945499   61935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 08:07:19.945531   61935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 08:07:19.945602   61935 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 08:07:19.945612   61935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 08:07:19.945639   61935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 08:07:19.945703   61935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-368115 san=[127.0.0.1 192.168.50.132 localhost minikube old-k8s-version-368115]
	I0915 08:07:20.220934   61935 provision.go:177] copyRemoteCerts
	I0915 08:07:20.221000   61935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 08:07:20.221040   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:20.223882   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.224245   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:20.224277   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.224422   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:20.224632   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.224801   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:20.224918   61935 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/old-k8s-version-368115/id_rsa Username:docker}
	I0915 08:07:20.303793   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 08:07:20.329392   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0915 08:07:20.354121   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 08:07:20.378983   61935 provision.go:87] duration metric: took 440.311286ms to configureAuth
	I0915 08:07:20.379014   61935 buildroot.go:189] setting minikube options for container-runtime
	I0915 08:07:20.379219   61935 config.go:182] Loaded profile config "old-k8s-version-368115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0915 08:07:20.379310   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:20.382427   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.382808   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:20.382829   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.383074   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:20.383285   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.383432   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.383553   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:20.383703   61935 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:20.383916   61935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0915 08:07:20.383937   61935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 08:07:20.599955   61935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 08:07:20.599984   61935 machine.go:96] duration metric: took 997.418815ms to provisionDockerMachine
	I0915 08:07:20.600001   61935 start.go:293] postStartSetup for "old-k8s-version-368115" (driver="kvm2")
	I0915 08:07:20.600017   61935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 08:07:20.600044   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:07:20.600407   61935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 08:07:20.600440   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:20.603213   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.603639   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:20.603669   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.603855   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:20.604056   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.604260   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:20.604408   61935 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/old-k8s-version-368115/id_rsa Username:docker}
	I0915 08:07:20.685611   61935 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 08:07:20.690511   61935 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 08:07:20.690534   61935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 08:07:20.690593   61935 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 08:07:20.690669   61935 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 08:07:20.690787   61935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 08:07:20.701768   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 08:07:20.726238   61935 start.go:296] duration metric: took 126.219957ms for postStartSetup
	I0915 08:07:20.726273   61935 fix.go:56] duration metric: took 21.447576831s for fixHost
	I0915 08:07:20.726295   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:20.728837   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.729177   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:20.729203   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.729358   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:20.729574   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.729744   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.729886   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:20.730042   61935 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:20.730232   61935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0915 08:07:20.730243   61935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 08:07:20.835181   61935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726387640.811958042
	
	I0915 08:07:20.835205   61935 fix.go:216] guest clock: 1726387640.811958042
	I0915 08:07:20.835215   61935 fix.go:229] Guest: 2024-09-15 08:07:20.811958042 +0000 UTC Remote: 2024-09-15 08:07:20.726277111 +0000 UTC m=+212.862471981 (delta=85.680931ms)
	I0915 08:07:20.835242   61935 fix.go:200] guest clock delta is within tolerance: 85.680931ms
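The fix.go lines above compare the guest clock, read over SSH with `date +%s.%N`, against the host clock and accept the drift when it is small. A rough sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far it
	// is from a reference time. Illustrative helper, not minikube's fix.go.
	func clockDelta(guestOut string, reference time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// %N prints nine digits, so the fractional part is already nanoseconds.
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(reference), nil
	}

	func main() {
		// Values taken from the log above: guest 1726387640.811958042 versus a
		// remote reading of .726277111 within the same second.
		delta, err := clockDelta("1726387640.811958042", time.Unix(1726387640, 726277111))
		if err != nil {
			panic(err)
		}
		fmt.Println(delta, math.Abs(delta.Seconds()) < 1.0)
	}

Run against the values in the log, this reports the same ~85.68ms delta the fix.go line calls "within tolerance".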
	I0915 08:07:20.835249   61935 start.go:83] releasing machines lock for "old-k8s-version-368115", held for 21.556590174s
	I0915 08:07:20.835275   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:07:20.835574   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetIP
	I0915 08:07:20.838852   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.839295   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:20.839326   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.839555   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:07:20.840093   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:07:20.840274   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .DriverName
	I0915 08:07:20.840394   61935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 08:07:20.840434   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:20.840493   61935 ssh_runner.go:195] Run: cat /version.json
	I0915 08:07:20.840519   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHHostname
	I0915 08:07:20.843598   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.843999   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:20.844022   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.844043   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.844437   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:20.844651   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:20.844658   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.844675   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:20.844873   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:20.844874   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHPort
	I0915 08:07:20.845034   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHKeyPath
	I0915 08:07:20.845026   61935 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/old-k8s-version-368115/id_rsa Username:docker}
	I0915 08:07:20.845167   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetSSHUsername
	I0915 08:07:20.845284   61935 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/old-k8s-version-368115/id_rsa Username:docker}
	I0915 08:07:20.923184   61935 ssh_runner.go:195] Run: systemctl --version
	I0915 08:07:20.951778   61935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 08:07:21.104021   61935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 08:07:21.112313   61935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 08:07:21.112397   61935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 08:07:21.129073   61935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 08:07:21.129101   61935 start.go:495] detecting cgroup driver to use...
	I0915 08:07:21.129186   61935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 08:07:21.150980   61935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 08:07:21.168361   61935 docker.go:217] disabling cri-docker service (if available) ...
	I0915 08:07:21.168459   61935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 08:07:21.185065   61935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 08:07:21.205549   61935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 08:07:21.325838   61935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 08:07:21.510209   61935 docker.go:233] disabling docker service ...
	I0915 08:07:21.510278   61935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 08:07:21.525481   61935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 08:07:21.545326   61935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 08:07:21.677341   61935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 08:07:21.823539   61935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 08:07:21.841258   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 08:07:21.864568   61935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0915 08:07:21.864644   61935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:21.880195   61935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 08:07:21.880271   61935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:21.896169   61935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:21.913348   61935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:21.929154   61935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 08:07:21.942938   61935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 08:07:21.955132   61935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 08:07:21.955213   61935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 08:07:21.971806   61935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 08:07:21.983865   61935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:07:22.139880   61935 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 08:07:22.268813   61935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 08:07:22.268898   61935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 08:07:22.275256   61935 start.go:563] Will wait 60s for crictl version
	I0915 08:07:22.275317   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:22.279282   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 08:07:22.325509   61935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 08:07:22.325610   61935 ssh_runner.go:195] Run: crio --version
	I0915 08:07:22.365746   61935 ssh_runner.go:195] Run: crio --version
	I0915 08:07:22.409351   61935 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0915 08:07:20.861980   61251 main.go:141] libmachine: (no-preload-778087) Calling .Start
	I0915 08:07:20.862156   61251 main.go:141] libmachine: (no-preload-778087) Ensuring networks are active...
	I0915 08:07:20.862903   61251 main.go:141] libmachine: (no-preload-778087) Ensuring network default is active
	I0915 08:07:20.863736   61251 main.go:141] libmachine: (no-preload-778087) Ensuring network mk-no-preload-778087 is active
	I0915 08:07:20.864181   61251 main.go:141] libmachine: (no-preload-778087) Getting domain xml...
	I0915 08:07:20.864870   61251 main.go:141] libmachine: (no-preload-778087) Creating domain...
	I0915 08:07:22.236866   61251 main.go:141] libmachine: (no-preload-778087) Waiting to get IP...
	I0915 08:07:22.237981   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:22.238675   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:22.238719   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:22.238650   62956 retry.go:31] will retry after 210.754526ms: waiting for machine to come up
	I0915 08:07:22.451094   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:22.451668   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:22.451710   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:22.451633   62956 retry.go:31] will retry after 342.619067ms: waiting for machine to come up
	I0915 08:07:22.795970   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:22.796585   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:22.796616   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:22.796559   62956 retry.go:31] will retry after 486.688323ms: waiting for machine to come up
	I0915 08:07:22.410948   61935 main.go:141] libmachine: (old-k8s-version-368115) Calling .GetIP
	I0915 08:07:22.413825   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:22.414212   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:70:dc", ip: ""} in network mk-old-k8s-version-368115: {Iface:virbr2 ExpiryTime:2024-09-15 09:07:10 +0000 UTC Type:0 Mac:52:54:00:40:70:dc Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:old-k8s-version-368115 Clientid:01:52:54:00:40:70:dc}
	I0915 08:07:22.414248   61935 main.go:141] libmachine: (old-k8s-version-368115) DBG | domain old-k8s-version-368115 has defined IP address 192.168.50.132 and MAC address 52:54:00:40:70:dc in network mk-old-k8s-version-368115
	I0915 08:07:22.414645   61935 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0915 08:07:22.419603   61935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 08:07:22.433038   61935 kubeadm.go:883] updating cluster {Name:old-k8s-version-368115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 08:07:22.433166   61935 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 08:07:22.433222   61935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 08:07:22.487677   61935 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0915 08:07:22.487743   61935 ssh_runner.go:195] Run: which lz4
	I0915 08:07:22.492269   61935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 08:07:22.497379   61935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 08:07:22.497414   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0915 08:07:21.942765   61464 node_ready.go:53] node "embed-certs-474196" has status "Ready":"False"
	I0915 08:07:22.442809   61464 node_ready.go:49] node "embed-certs-474196" has status "Ready":"True"
	I0915 08:07:22.442830   61464 node_ready.go:38] duration metric: took 7.005036263s for node "embed-certs-474196" to be "Ready" ...
	I0915 08:07:22.442838   61464 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 08:07:22.449563   61464 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-np76n" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:22.456191   61464 pod_ready.go:93] pod "coredns-7c65d6cfc9-np76n" in "kube-system" namespace has status "Ready":"True"
	I0915 08:07:22.456221   61464 pod_ready.go:82] duration metric: took 6.625675ms for pod "coredns-7c65d6cfc9-np76n" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:22.456235   61464 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:22.463484   61464 pod_ready.go:93] pod "etcd-embed-certs-474196" in "kube-system" namespace has status "Ready":"True"
	I0915 08:07:22.463510   61464 pod_ready.go:82] duration metric: took 7.265171ms for pod "etcd-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:22.463521   61464 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:22.468202   61464 pod_ready.go:93] pod "kube-apiserver-embed-certs-474196" in "kube-system" namespace has status "Ready":"True"
	I0915 08:07:22.468221   61464 pod_ready.go:82] duration metric: took 4.691031ms for pod "kube-apiserver-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:22.468231   61464 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:24.477185   61464 pod_ready.go:103] pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace has status "Ready":"False"
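The pod_ready.go lines above poll each system-critical pod until its PodReady condition turns True (coredns, etcd and kube-apiserver flipped immediately; kube-controller-manager is still being retried). A minimal client-go sketch of that check is below; the names are illustrative and this is not minikube's actual helper.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True, which is
	// roughly what the pod_ready.go lines above are waiting for.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ok, err := podReady(cs, "kube-system", "kube-controller-manager-embed-certs-474196")
		fmt.Println(ok, err)
	}

In the test, a check like this is simply repeated until the 6m0s deadline noted at pod_ready.go:36 expires.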
	I0915 08:07:23.285382   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:23.285898   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:23.285922   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:23.285870   62956 retry.go:31] will retry after 603.962771ms: waiting for machine to come up
	I0915 08:07:23.891801   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:23.892367   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:23.892393   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:23.892328   62956 retry.go:31] will retry after 693.598571ms: waiting for machine to come up
	I0915 08:07:24.587171   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:24.587788   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:24.587818   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:24.587741   62956 retry.go:31] will retry after 641.795564ms: waiting for machine to come up
	I0915 08:07:25.231687   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:25.232200   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:25.232226   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:25.232163   62956 retry.go:31] will retry after 823.686887ms: waiting for machine to come up
	I0915 08:07:26.057750   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:26.058334   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:26.058364   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:26.058277   62956 retry.go:31] will retry after 1.116434398s: waiting for machine to come up
	I0915 08:07:27.176489   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:27.177041   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:27.177067   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:27.176980   62956 retry.go:31] will retry after 1.65820425s: waiting for machine to come up
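The repeated "will retry after ..." lines come from a retry helper that polls libvirt for the VM's DHCP lease with a growing, jittered delay. A minimal sketch of that pattern follows; the names are hypothetical and this is not minikube's retry package.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls probe with a growing, jittered delay until it returns an
	// address or the timeout elapses. Hypothetical helper for illustration.
	func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := probe(); err == nil {
				return ip, nil
			}
			// Jitter keeps parallel waiters (several profiles start at once in
			// this log) from probing libvirt in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		attempts := 0
		probe := func() (string, error) {
			if attempts++; attempts < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.50.132", nil
		}
		ip, err := waitForIP(probe, time.Minute)
		fmt.Println(ip, err)
	}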
	I0915 08:07:24.186043   61935 crio.go:462] duration metric: took 1.693808785s to copy over tarball
	I0915 08:07:24.186125   61935 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 08:07:27.238768   61935 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.052603174s)
	I0915 08:07:27.238810   61935 crio.go:469] duration metric: took 3.052734783s to extract the tarball
	I0915 08:07:27.238820   61935 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 08:07:27.283098   61935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 08:07:27.327007   61935 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
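The crio.go:510 line above is the result of running `sudo crictl images --output json` and noticing that the expected tags (for example registry.k8s.io/kube-apiserver:v1.20.0) are missing, so the cached images have to be loaded individually. A small sketch of parsing that output is below; the JSON field names follow crictl's output as I understand it and should be treated as assumptions.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImages models the subset of `crictl images --output json` used here.
	// Assumed field names; verify against your crictl version.
	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// listImageTags returns every repo tag known to the container runtime, which
	// is what a preload check can compare against the required image list.
	func listImageTags() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var parsed criImages
		if err := json.Unmarshal(out, &parsed); err != nil {
			return nil, err
		}
		var tags []string
		for _, img := range parsed.Images {
			tags = append(tags, img.RepoTags...)
		}
		return tags, nil
	}

	func main() {
		tags, err := listImageTags()
		if err != nil {
			fmt.Println("crictl not available here:", err)
			return
		}
		for _, t := range tags {
			fmt.Println(t)
		}
	}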
	I0915 08:07:27.327043   61935 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0915 08:07:27.327092   61935 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:27.327160   61935 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0915 08:07:27.327190   61935 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0915 08:07:27.327202   61935 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0915 08:07:27.327421   61935 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0915 08:07:27.327169   61935 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 08:07:27.327427   61935 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0915 08:07:27.327446   61935 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0915 08:07:27.328921   61935 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0915 08:07:27.328936   61935 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0915 08:07:27.328993   61935 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0915 08:07:27.329034   61935 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0915 08:07:27.329064   61935 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:27.329177   61935 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0915 08:07:27.329216   61935 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 08:07:27.329727   61935 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0915 08:07:27.508890   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0915 08:07:27.556740   61935 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0915 08:07:27.556797   61935 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0915 08:07:27.556846   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:27.560889   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0915 08:07:27.581517   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0915 08:07:27.594997   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0915 08:07:27.611468   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 08:07:27.623240   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0915 08:07:27.624807   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0915 08:07:27.625433   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0915 08:07:27.670783   61935 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0915 08:07:27.670880   61935 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0915 08:07:27.670936   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:27.673332   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0915 08:07:27.690518   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0915 08:07:27.804554   61935 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0915 08:07:27.804597   61935 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0915 08:07:27.804598   61935 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0915 08:07:27.804626   61935 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0915 08:07:27.804642   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:27.804655   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:27.804566   61935 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0915 08:07:27.804706   61935 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0915 08:07:27.804728   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0915 08:07:27.804734   61935 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0915 08:07:27.804747   61935 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0915 08:07:27.804751   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:27.804762   61935 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 08:07:27.804829   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:27.819514   61935 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0915 08:07:27.819602   61935 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0915 08:07:27.819651   61935 ssh_runner.go:195] Run: which crictl
	I0915 08:07:27.854939   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0915 08:07:27.854961   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0915 08:07:27.855024   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 08:07:27.855045   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0915 08:07:27.855047   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0915 08:07:27.855161   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0915 08:07:26.982614   61464 pod_ready.go:93] pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace has status "Ready":"True"
	I0915 08:07:26.982642   61464 pod_ready.go:82] duration metric: took 4.514402092s for pod "kube-controller-manager-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:26.982656   61464 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5tmwl" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:26.994821   61464 pod_ready.go:93] pod "kube-proxy-5tmwl" in "kube-system" namespace has status "Ready":"True"
	I0915 08:07:26.994853   61464 pod_ready.go:82] duration metric: took 12.188742ms for pod "kube-proxy-5tmwl" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:26.994867   61464 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:27.004244   61464 pod_ready.go:93] pod "kube-scheduler-embed-certs-474196" in "kube-system" namespace has status "Ready":"True"
	I0915 08:07:27.004269   61464 pod_ready.go:82] duration metric: took 9.395282ms for pod "kube-scheduler-embed-certs-474196" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:27.004280   61464 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace to be "Ready" ...
	I0915 08:07:29.152573   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:28.836426   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:28.836963   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:28.836994   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:28.836904   62956 retry.go:31] will retry after 2.184756862s: waiting for machine to come up
	I0915 08:07:31.022983   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:31.023546   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:31.023568   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:31.023498   62956 retry.go:31] will retry after 1.868005158s: waiting for machine to come up
	I0915 08:07:27.990359   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0915 08:07:27.990414   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0915 08:07:27.990435   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 08:07:27.994409   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0915 08:07:27.994507   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0915 08:07:27.994529   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0915 08:07:28.126553   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0915 08:07:28.130000   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0915 08:07:28.130112   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0915 08:07:28.151612   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0915 08:07:28.151663   61935 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0915 08:07:28.151733   61935 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0915 08:07:28.254324   61935 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0915 08:07:28.254353   61935 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0915 08:07:28.254411   61935 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0915 08:07:28.266998   61935 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0915 08:07:28.267105   61935 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0915 08:07:28.518338   61935 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:28.659039   61935 cache_images.go:92] duration metric: took 1.331976586s to LoadCachedImages
	W0915 08:07:28.659167   61935 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
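
The LoadCachedImages pass above probes the node's runtime for each required image with "sudo podman image inspect --format {{.Id}}" and, when the tag is missing or stored under a different hash, removes it with "crictl rmi" before trying to load the cached tarball. A minimal Go sketch of that probe-and-remove decision, shelling out to the same two commands that appear in the log (the helper names imageID/needsTransfer are illustrative, not minikube's own API; the expected hash is copied from the kube-proxy line above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks podman for the stored ID of an image; a non-nil error means
// the image is not present in the container runtime at all.
func imageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return strings.TrimSpace(string(out)), err
}

// needsTransfer mirrors the "needs transfer" lines above: the image must be
// (re)loaded when it is absent or stored under a different hash than expected.
func needsTransfer(image, wantID string) bool {
	id, err := imageID(image)
	return err != nil || id != wantID
}

func main() {
	image := "registry.k8s.io/kube-proxy:v1.20.0"
	want := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	if needsTransfer(image, want) {
		fmt.Printf("%q needs transfer, removing stale tag\n", image)
		// Same removal command the log runs through ssh_runner.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	}
}
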
	I0915 08:07:28.659183   61935 kubeadm.go:934] updating node { 192.168.50.132 8443 v1.20.0 crio true true} ...
	I0915 08:07:28.659305   61935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-368115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 08:07:28.659371   61935 ssh_runner.go:195] Run: crio config
	I0915 08:07:28.716567   61935 cni.go:84] Creating CNI manager for ""
	I0915 08:07:28.716589   61935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 08:07:28.716600   61935 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 08:07:28.716619   61935 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-368115 NodeName:old-k8s-version-368115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0915 08:07:28.716753   61935 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-368115"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 08:07:28.716813   61935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0915 08:07:28.730187   61935 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 08:07:28.730260   61935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 08:07:28.743112   61935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0915 08:07:28.762102   61935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 08:07:28.779550   61935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
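
The multi-document configuration printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. As a stdlib-only illustration of that file layout, and not of how minikube itself reads the file back, the sketch below splits the documents on their "---" separators and pulls one top-level field out of the ClusterConfiguration document:

package main

import (
	"fmt"
	"os"
	"strings"
)

// clusterField scans a multi-document kubeadm YAML (documents separated by a
// bare "---" line) and returns the value of a top-level "key: value" line from
// the document whose kind matches wantKind.
func clusterField(data, wantKind, key string) (string, bool) {
	for _, doc := range strings.Split(data, "\n---\n") {
		if !strings.Contains(doc, "kind: "+wantKind) {
			continue
		}
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, key+":") {
				return strings.TrimSpace(strings.TrimPrefix(line, key+":")), true
			}
		}
	}
	return "", false
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	if v, ok := clusterField(string(data), "ClusterConfiguration", "kubernetesVersion"); ok {
		fmt.Println("kubernetesVersion:", v) // v1.20.0 for the config above
	}
}
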
	I0915 08:07:28.797447   61935 ssh_runner.go:195] Run: grep 192.168.50.132	control-plane.minikube.internal$ /etc/hosts
	I0915 08:07:28.801418   61935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 08:07:28.813932   61935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:07:28.949727   61935 ssh_runner.go:195] Run: sudo systemctl start kubelet
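
A few lines above, control-plane.minikube.internal is pinned to the node IP by grepping any old mapping out of /etc/hosts and appending a fresh line in a single bash command before the kubelet is restarted. The same idea in stdlib Go, as a sketch rather than a drop-in replacement for that one-liner (it needs root to write /etc/hosts; the IP and alias are taken from the command in the log):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any line that ends with "<tab><host>" (the same match
// the grep -v in the log uses) and appends a fresh "ip<tab>host" mapping.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue // stale mapping for this host
		}
		kept = append(kept, line)
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1] // avoid stacking blank lines at the end
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.132", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
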
	I0915 08:07:28.968176   61935 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115 for IP: 192.168.50.132
	I0915 08:07:28.968199   61935 certs.go:194] generating shared ca certs ...
	I0915 08:07:28.968218   61935 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:07:28.968391   61935 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 08:07:28.968452   61935 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 08:07:28.968469   61935 certs.go:256] generating profile certs ...
	I0915 08:07:28.968594   61935 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/client.key
	I0915 08:07:28.968658   61935 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/apiserver.key.4470998f
	I0915 08:07:28.968729   61935 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/proxy-client.key
	I0915 08:07:28.968887   61935 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 08:07:28.968935   61935 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 08:07:28.968944   61935 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 08:07:28.968979   61935 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 08:07:28.969013   61935 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 08:07:28.969045   61935 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 08:07:28.969103   61935 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 08:07:28.969708   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 08:07:29.003925   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 08:07:29.038038   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 08:07:29.072101   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 08:07:29.115563   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0915 08:07:29.169753   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 08:07:29.202922   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 08:07:29.242397   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/old-k8s-version-368115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 08:07:29.271208   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 08:07:29.298849   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 08:07:29.324521   61935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 08:07:29.349460   61935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 08:07:29.366812   61935 ssh_runner.go:195] Run: openssl version
	I0915 08:07:29.372555   61935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 08:07:29.383100   61935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 08:07:29.387609   61935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 08:07:29.387656   61935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 08:07:29.393569   61935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 08:07:29.404220   61935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 08:07:29.415062   61935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 08:07:29.419561   61935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 08:07:29.419613   61935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 08:07:29.425262   61935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 08:07:29.436106   61935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 08:07:29.447819   61935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:29.453049   61935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:29.453114   61935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:29.459134   61935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 08:07:29.470802   61935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 08:07:29.475353   61935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 08:07:29.481354   61935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 08:07:29.487032   61935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 08:07:29.493094   61935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 08:07:29.499181   61935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 08:07:29.505187   61935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
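
Just above, each control-plane certificate is checked with "openssl x509 -noout -in <cert> -checkend 86400", which exits non-zero if the certificate expires within the next 24 hours. A sketch of that same loop in Go, driving the identical openssl invocation; the path list is copied from the log and error handling is kept minimal:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, crt := range certs {
		// -checkend 86400 makes openssl exit non-zero when the cert
		// expires within 24 hours (or cannot be read).
		err := exec.Command("openssl", "x509", "-noout", "-in", crt,
			"-checkend", "86400").Run()
		if err != nil {
			fmt.Printf("%s: expiring within 24h or unreadable: %v\n", crt, err)
			continue
		}
		fmt.Printf("%s: valid for at least another 24h\n", crt)
	}
}
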
	I0915 08:07:29.512622   61935 kubeadm.go:392] StartCluster: {Name:old-k8s-version-368115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 08:07:29.512744   61935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 08:07:29.512791   61935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 08:07:29.554704   61935 cri.go:89] found id: ""
	I0915 08:07:29.554779   61935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 08:07:29.566209   61935 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 08:07:29.566232   61935 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 08:07:29.566286   61935 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 08:07:29.577114   61935 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 08:07:29.578000   61935 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-368115" does not appear in /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 08:07:29.578589   61935 kubeconfig.go:62] /home/jenkins/minikube-integration/19644-6166/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-368115" cluster setting kubeconfig missing "old-k8s-version-368115" context setting]
	I0915 08:07:29.579360   61935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:07:29.670602   61935 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 08:07:29.681676   61935 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.132
	I0915 08:07:29.681715   61935 kubeadm.go:1160] stopping kube-system containers ...
	I0915 08:07:29.681728   61935 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0915 08:07:29.681784   61935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 08:07:29.724645   61935 cri.go:89] found id: ""
	I0915 08:07:29.724706   61935 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0915 08:07:29.742277   61935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 08:07:29.756781   61935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 08:07:29.756801   61935 kubeadm.go:157] found existing configuration files:
	
	I0915 08:07:29.756857   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 08:07:29.766033   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 08:07:29.766085   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 08:07:29.775031   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 08:07:29.783695   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 08:07:29.783753   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 08:07:29.793723   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 08:07:29.802350   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 08:07:29.802400   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 08:07:29.811764   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 08:07:29.820606   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 08:07:29.820656   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 08:07:29.830488   61935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 08:07:29.841052   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:29.971862   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:30.955524   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:31.202372   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:31.294411   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:31.387526   61935 api_server.go:52] waiting for apiserver process to appear ...
	I0915 08:07:31.387633   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:31.887655   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:32.388155   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:32.887732   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:31.510983   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:33.512770   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:32.893344   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:32.893902   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:32.893928   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:32.893872   62956 retry.go:31] will retry after 2.446085678s: waiting for machine to come up
	I0915 08:07:35.343548   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:35.344031   61251 main.go:141] libmachine: (no-preload-778087) DBG | unable to find current IP address of domain no-preload-778087 in network mk-no-preload-778087
	I0915 08:07:35.344072   61251 main.go:141] libmachine: (no-preload-778087) DBG | I0915 08:07:35.343976   62956 retry.go:31] will retry after 4.275795715s: waiting for machine to come up
	I0915 08:07:33.388239   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:33.887952   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:34.388094   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:34.888512   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:35.388665   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:35.887740   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:36.388502   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:36.888081   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:37.388022   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:37.888490   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
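
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a simple poll: run pgrep roughly every half second until a kube-apiserver process for this profile shows up or a deadline passes. A stdlib Go sketch of that wait loop; the 2-minute deadline here is an assumption for the example, not minikube's actual timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process exists for the
// minikube profile, or the deadline expires.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Same match the log uses: newest process, full command line match.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForAPIServer(500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}
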
	I0915 08:07:36.010291   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:38.011302   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:40.511656   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:39.622798   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.623240   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has current primary IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.623265   61251 main.go:141] libmachine: (no-preload-778087) Found IP for machine: 192.168.61.247
	I0915 08:07:39.623304   61251 main.go:141] libmachine: (no-preload-778087) Reserving static IP address...
	I0915 08:07:39.623638   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "no-preload-778087", mac: "52:54:00:48:53:6a", ip: "192.168.61.247"} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:39.623657   61251 main.go:141] libmachine: (no-preload-778087) Reserved static IP address: 192.168.61.247
	I0915 08:07:39.623669   61251 main.go:141] libmachine: (no-preload-778087) DBG | skip adding static IP to network mk-no-preload-778087 - found existing host DHCP lease matching {name: "no-preload-778087", mac: "52:54:00:48:53:6a", ip: "192.168.61.247"}
	I0915 08:07:39.623681   61251 main.go:141] libmachine: (no-preload-778087) DBG | Getting to WaitForSSH function...
	I0915 08:07:39.623696   61251 main.go:141] libmachine: (no-preload-778087) Waiting for SSH to be available...
	I0915 08:07:39.625818   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.626089   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:39.626120   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.626264   61251 main.go:141] libmachine: (no-preload-778087) DBG | Using SSH client type: external
	I0915 08:07:39.626286   61251 main.go:141] libmachine: (no-preload-778087) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa (-rw-------)
	I0915 08:07:39.626322   61251 main.go:141] libmachine: (no-preload-778087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 08:07:39.626334   61251 main.go:141] libmachine: (no-preload-778087) DBG | About to run SSH command:
	I0915 08:07:39.626342   61251 main.go:141] libmachine: (no-preload-778087) DBG | exit 0
	I0915 08:07:39.750175   61251 main.go:141] libmachine: (no-preload-778087) DBG | SSH cmd err, output: <nil>: 
	I0915 08:07:39.750556   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetConfigRaw
	I0915 08:07:39.751167   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetIP
	I0915 08:07:39.753818   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.754228   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:39.754261   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.754464   61251 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/config.json ...
	I0915 08:07:39.754635   61251 machine.go:93] provisionDockerMachine start ...
	I0915 08:07:39.754652   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:07:39.754861   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:39.757271   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.757620   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:39.757641   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.757775   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:39.757984   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:39.758116   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:39.758244   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:39.758377   61251 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:39.758541   61251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0915 08:07:39.758550   61251 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 08:07:39.862312   61251 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0915 08:07:39.862389   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetMachineName
	I0915 08:07:39.862652   61251 buildroot.go:166] provisioning hostname "no-preload-778087"
	I0915 08:07:39.862682   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetMachineName
	I0915 08:07:39.862835   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:39.865752   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.866208   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:39.866238   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.866529   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:39.866765   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:39.866924   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:39.867065   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:39.867222   61251 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:39.867427   61251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0915 08:07:39.867443   61251 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-778087 && echo "no-preload-778087" | sudo tee /etc/hostname
	I0915 08:07:39.985041   61251 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-778087
	
	I0915 08:07:39.985067   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:39.988109   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.988477   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:39.988515   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:39.988592   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:39.988790   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:39.988963   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:39.989092   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:39.989270   61251 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:39.989458   61251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0915 08:07:39.989483   61251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778087/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 08:07:40.102940   61251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 08:07:40.102969   61251 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 08:07:40.102995   61251 buildroot.go:174] setting up certificates
	I0915 08:07:40.103008   61251 provision.go:84] configureAuth start
	I0915 08:07:40.103020   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetMachineName
	I0915 08:07:40.103339   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetIP
	I0915 08:07:40.106557   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.106866   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.106894   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.107064   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:40.109597   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.109963   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.109980   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.110109   61251 provision.go:143] copyHostCerts
	I0915 08:07:40.110168   61251 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 08:07:40.110182   61251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 08:07:40.110274   61251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 08:07:40.110379   61251 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 08:07:40.110388   61251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 08:07:40.110419   61251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 08:07:40.110473   61251 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 08:07:40.110480   61251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 08:07:40.110497   61251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 08:07:40.110544   61251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.no-preload-778087 san=[127.0.0.1 192.168.61.247 localhost minikube no-preload-778087]
	I0915 08:07:40.157961   61251 provision.go:177] copyRemoteCerts
	I0915 08:07:40.158018   61251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 08:07:40.158045   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:40.160612   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.160873   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.160904   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.161068   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:40.161246   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.161408   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:40.161576   61251 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa Username:docker}
	I0915 08:07:40.244148   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 08:07:40.270870   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0915 08:07:40.294382   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 08:07:40.318554   61251 provision.go:87] duration metric: took 215.533539ms to configureAuth
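
configureAuth above generates a server certificate whose subject alternative names cover the VM's addresses and hostnames (san=[127.0.0.1 192.168.61.247 localhost minikube no-preload-778087]) and copies it to /etc/docker on the node. As a rough illustration of that SAN layout with the standard crypto/x509 package, here is a self-signed stand-in; minikube actually signs server.pem with the CA under its machines directory, which this sketch skips:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs and org copied from the "generating server cert" line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-778087"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-778087"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.247")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Self-signed here for brevity (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
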
	I0915 08:07:40.318584   61251 buildroot.go:189] setting minikube options for container-runtime
	I0915 08:07:40.318766   61251 config.go:182] Loaded profile config "no-preload-778087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 08:07:40.318836   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:40.321431   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.321841   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.321881   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.322012   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:40.322230   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.322355   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.322500   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:40.322649   61251 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:40.322807   61251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0915 08:07:40.322822   61251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 08:07:40.548256   61251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 08:07:40.548283   61251 machine.go:96] duration metric: took 793.637333ms to provisionDockerMachine
	I0915 08:07:40.548293   61251 start.go:293] postStartSetup for "no-preload-778087" (driver="kvm2")
	I0915 08:07:40.548303   61251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 08:07:40.548319   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:07:40.548622   61251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 08:07:40.548651   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:40.551242   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.551611   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.551643   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.551748   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:40.551954   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.552087   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:40.552257   61251 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa Username:docker}
	I0915 08:07:40.632674   61251 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 08:07:40.637452   61251 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 08:07:40.637483   61251 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 08:07:40.637568   61251 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 08:07:40.637646   61251 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 08:07:40.637731   61251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 08:07:40.647129   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 08:07:40.672076   61251 start.go:296] duration metric: took 123.772036ms for postStartSetup
	I0915 08:07:40.672117   61251 fix.go:56] duration metric: took 19.836728929s for fixHost
	I0915 08:07:40.672140   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:40.674820   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.675170   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.675197   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.675345   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:40.675540   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.675706   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.675828   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:40.675975   61251 main.go:141] libmachine: Using SSH client type: native
	I0915 08:07:40.676145   61251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0915 08:07:40.676155   61251 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 08:07:40.778394   61251 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726387660.750387953
	
	I0915 08:07:40.778417   61251 fix.go:216] guest clock: 1726387660.750387953
	I0915 08:07:40.778427   61251 fix.go:229] Guest: 2024-09-15 08:07:40.750387953 +0000 UTC Remote: 2024-09-15 08:07:40.672122646 +0000 UTC m=+337.859282313 (delta=78.265307ms)
	I0915 08:07:40.778450   61251 fix.go:200] guest clock delta is within tolerance: 78.265307ms
	I0915 08:07:40.778470   61251 start.go:83] releasing machines lock for "no-preload-778087", held for 19.943117688s
	I0915 08:07:40.778498   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:07:40.778800   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetIP
	I0915 08:07:40.781625   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.781956   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.781981   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.782085   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:07:40.782565   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:07:40.782701   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:07:40.782771   61251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 08:07:40.782823   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:40.782915   61251 ssh_runner.go:195] Run: cat /version.json
	I0915 08:07:40.782944   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:07:40.785278   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.785632   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.785660   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.785724   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.785800   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:40.785971   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.786121   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:40.786160   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:40.786183   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:40.786297   61251 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa Username:docker}
	I0915 08:07:40.786310   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:07:40.786559   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:07:40.786706   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:07:40.786850   61251 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa Username:docker}
	I0915 08:07:40.887589   61251 ssh_runner.go:195] Run: systemctl --version
	I0915 08:07:40.893783   61251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 08:07:41.035558   61251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 08:07:41.042465   61251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 08:07:41.042516   61251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 08:07:41.057972   61251 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 08:07:41.057995   61251 start.go:495] detecting cgroup driver to use...
	I0915 08:07:41.058061   61251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 08:07:41.075284   61251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 08:07:41.089793   61251 docker.go:217] disabling cri-docker service (if available) ...
	I0915 08:07:41.089861   61251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 08:07:41.103954   61251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 08:07:41.118559   61251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 08:07:41.242558   61251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 08:07:41.386388   61251 docker.go:233] disabling docker service ...
	I0915 08:07:41.386469   61251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 08:07:41.403751   61251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 08:07:41.417578   61251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 08:07:41.561872   61251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 08:07:41.683322   61251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 08:07:41.698736   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 08:07:41.718187   61251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 08:07:41.718261   61251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:41.728621   61251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 08:07:41.728691   61251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:41.738984   61251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:41.748958   61251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:41.758888   61251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 08:07:41.769201   61251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:41.779095   61251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 08:07:41.795850   61251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
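For reference, the sed edits above are aimed at a handful of settings in /etc/crio/crio.conf.d/02-crio.conf. The following is a hand-assembled shell sketch for checking the result by hand; key names and values are taken from the log, while the grep pattern and the "expected" lines are illustrative rather than output captured from the guest:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected to show, after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",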
	I0915 08:07:41.805658   61251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 08:07:41.814608   61251 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0915 08:07:41.814675   61251 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0915 08:07:41.827985   61251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
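The netfilter handling above (sysctl exits with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet, so minikube loads br_netfilter and then enables IPv4 forwarding) can be reproduced with a minimal shell sketch; the module and key names are taken from the log, and the retry after modprobe is illustrative:

    # run as root on the guest
    sysctl net.bridge.bridge-nf-call-iptables || {
      modprobe br_netfilter                      # provides /proc/sys/net/bridge/*
      sysctl net.bridge.bridge-nf-call-iptables  # should now resolve
    }
    echo 1 > /proc/sys/net/ipv4/ip_forward       # same forwarding toggle as the log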
	I0915 08:07:41.839028   61251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:07:41.955863   61251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 08:07:42.056007   61251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 08:07:42.056070   61251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 08:07:42.060890   61251 start.go:563] Will wait 60s for crictl version
	I0915 08:07:42.060939   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:42.064831   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 08:07:42.101188   61251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 08:07:42.101282   61251 ssh_runner.go:195] Run: crio --version
	I0915 08:07:42.129685   61251 ssh_runner.go:195] Run: crio --version
	I0915 08:07:42.160022   61251 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 08:07:42.161293   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetIP
	I0915 08:07:42.164007   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:42.164351   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:07:42.164373   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:07:42.164573   61251 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0915 08:07:42.168754   61251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 08:07:42.180933   61251 kubeadm.go:883] updating cluster {Name:no-preload-778087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-778087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 08:07:42.181041   61251 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 08:07:42.181070   61251 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 08:07:42.215103   61251 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0915 08:07:42.215129   61251 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0915 08:07:42.215171   61251 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:42.215221   61251 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0915 08:07:42.215240   61251 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 08:07:42.215258   61251 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0915 08:07:42.215358   61251 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0915 08:07:42.215362   61251 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0915 08:07:42.215399   61251 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0915 08:07:42.215604   61251 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0915 08:07:42.216554   61251 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0915 08:07:42.216663   61251 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0915 08:07:42.216675   61251 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0915 08:07:42.216685   61251 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:42.216685   61251 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0915 08:07:42.216786   61251 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 08:07:42.216821   61251 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0915 08:07:42.216863   61251 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0915 08:07:42.370013   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0915 08:07:42.386688   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0915 08:07:42.389110   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0915 08:07:42.390090   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 08:07:42.394859   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0915 08:07:42.450082   61251 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0915 08:07:42.450129   61251 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0915 08:07:42.450168   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:42.458866   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0915 08:07:42.472668   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0915 08:07:42.496821   61251 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0915 08:07:42.496873   61251 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0915 08:07:42.496927   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:42.499987   61251 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0915 08:07:42.500025   61251 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0915 08:07:42.500069   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:42.506020   61251 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0915 08:07:42.506058   61251 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 08:07:42.506102   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:42.515969   61251 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0915 08:07:42.516008   61251 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0915 08:07:42.516024   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0915 08:07:42.516042   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:42.677240   61251 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0915 08:07:42.677284   61251 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0915 08:07:42.677314   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0915 08:07:42.677326   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0915 08:07:42.677317   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:42.677408   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 08:07:42.677413   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0915 08:07:42.677516   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0915 08:07:42.788551   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0915 08:07:42.790081   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0915 08:07:42.790167   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0915 08:07:42.790216   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0915 08:07:42.803298   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 08:07:42.803356   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0915 08:07:38.387723   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:38.887935   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:39.388118   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:39.888386   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:40.387796   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:40.888390   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:41.388041   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:41.888393   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:42.387686   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:42.888748   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:43.010710   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:45.562202   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:42.919096   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0915 08:07:42.919141   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0915 08:07:42.919195   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0915 08:07:42.919255   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0915 08:07:42.940682   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 08:07:42.949337   61251 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0915 08:07:42.949423   61251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0915 08:07:43.048446   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0915 08:07:43.048453   61251 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0915 08:07:43.048540   61251 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0915 08:07:43.048560   61251 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0915 08:07:43.048587   61251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0915 08:07:43.048632   61251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0915 08:07:43.048632   61251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0915 08:07:43.048690   61251 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0915 08:07:43.048707   61251 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0915 08:07:43.048730   61251 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0915 08:07:43.048759   61251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0915 08:07:43.048767   61251 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0915 08:07:43.066437   61251 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0915 08:07:43.103786   61251 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0915 08:07:43.103787   61251 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0915 08:07:43.103836   61251 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0915 08:07:43.103874   61251 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0915 08:07:43.103908   61251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0915 08:07:43.430643   61251 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:45.655249   61251 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.551320535s)
	I0915 08:07:45.655288   61251 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0915 08:07:45.655288   61251 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.606501726s)
	I0915 08:07:45.655307   61251 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0915 08:07:45.655313   61251 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.224643123s)
	I0915 08:07:45.655347   61251 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0915 08:07:45.655349   61251 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0915 08:07:45.655385   61251 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:45.655409   61251 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0915 08:07:45.655442   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:07:47.635430   61251 ssh_runner.go:235] Completed: which crictl: (1.979950974s)
	I0915 08:07:47.635477   61251 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.980044018s)
	I0915 08:07:47.635508   61251 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0915 08:07:47.635511   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:47.635565   61251 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0915 08:07:47.635613   61251 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0915 08:07:47.693314   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:43.388037   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:43.888048   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:44.388308   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:44.888162   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:45.387737   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:45.888114   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:46.388677   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:46.888409   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:47.387913   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:47.887701   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:48.010554   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:50.011317   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:51.034694   61251 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.341341612s)
	I0915 08:07:51.034786   61251 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:07:51.034924   61251 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.399298195s)
	I0915 08:07:51.034941   61251 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0915 08:07:51.034965   61251 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0915 08:07:51.035015   61251 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0915 08:07:51.085503   61251 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0915 08:07:51.085622   61251 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0915 08:07:48.388038   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:48.888366   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:49.387706   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:49.888377   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:50.387822   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:50.887795   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:51.388645   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:51.888176   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:52.388324   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:52.888636   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:52.011569   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:54.511329   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:52.912050   61251 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.877006563s)
	I0915 08:07:52.912091   61251 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0915 08:07:52.912117   61251 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0915 08:07:52.912207   61251 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0915 08:07:52.912055   61251 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.826409781s)
	I0915 08:07:52.912251   61251 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0915 08:07:54.365362   61251 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.453125176s)
	I0915 08:07:54.365405   61251 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0915 08:07:54.365437   61251 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0915 08:07:54.365498   61251 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0915 08:07:56.429293   61251 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.063767989s)
	I0915 08:07:56.429327   61251 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0915 08:07:56.429361   61251 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0915 08:07:56.429431   61251 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0915 08:07:57.081775   61251 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19644-6166/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0915 08:07:57.081837   61251 cache_images.go:123] Successfully loaded all cached images
	I0915 08:07:57.081845   61251 cache_images.go:92] duration metric: took 14.866702228s to LoadCachedImages
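The LoadCachedImages sequence above (skip tarballs that already exist on the guest, then podman load each one in turn) boils down to something like the following sketch; the loop and file list are reconstructed from the paths in the log, not taken from minikube's own code:

    for img in kube-apiserver_v1.31.1 coredns_v1.11.3 etcd_3.5.15-0 \
               kube-controller-manager_v1.31.1 kube-scheduler_v1.31.1 \
               kube-proxy_v1.31.1 storage-provisioner_v5; do
      sudo podman load -i "/var/lib/minikube/images/${img}"
    done
    sudo crictl images        # the loaded images should now be visible to CRI-O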
	I0915 08:07:57.081860   61251 kubeadm.go:934] updating node { 192.168.61.247 8443 v1.31.1 crio true true} ...
	I0915 08:07:57.082143   61251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-778087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-778087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 08:07:57.082254   61251 ssh_runner.go:195] Run: crio config
	I0915 08:07:57.125858   61251 cni.go:84] Creating CNI manager for ""
	I0915 08:07:57.125879   61251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 08:07:57.125888   61251 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 08:07:57.125907   61251 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.247 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-778087 NodeName:no-preload-778087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 08:07:57.126038   61251 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-778087"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 08:07:57.126101   61251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 08:07:57.136968   61251 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 08:07:57.137025   61251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 08:07:57.146265   61251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0915 08:07:57.162379   61251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 08:07:57.178407   61251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
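At this point the generated kubeadm config has been written to /var/tmp/minikube/kubeadm.yaml.new on the guest. One way to sanity-check it by hand is sketched below; this step is not part of the log and assumes the `kubeadm config validate` subcommand is available in the v1.31.1 binary:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new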
	I0915 08:07:57.194504   61251 ssh_runner.go:195] Run: grep 192.168.61.247	control-plane.minikube.internal$ /etc/hosts
	I0915 08:07:57.198229   61251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 08:07:57.209889   61251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:07:57.344384   61251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 08:07:57.360766   61251 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087 for IP: 192.168.61.247
	I0915 08:07:57.360792   61251 certs.go:194] generating shared ca certs ...
	I0915 08:07:57.360821   61251 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:07:57.360990   61251 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 08:07:57.361042   61251 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 08:07:57.361055   61251 certs.go:256] generating profile certs ...
	I0915 08:07:57.361192   61251 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/client.key
	I0915 08:07:57.361284   61251 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/apiserver.key.6390af51
	I0915 08:07:57.361347   61251 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/proxy-client.key
	I0915 08:07:57.361549   61251 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 08:07:57.361638   61251 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 08:07:57.361654   61251 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 08:07:57.361704   61251 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 08:07:57.361737   61251 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 08:07:57.361768   61251 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 08:07:57.361843   61251 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 08:07:57.362760   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 08:07:57.394241   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 08:07:57.424934   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 08:07:57.460876   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 08:07:57.495364   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0915 08:07:57.529725   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 08:07:57.567670   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 08:07:57.590786   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 08:07:57.613534   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 08:07:57.636914   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 08:07:57.668206   61251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 08:07:57.693253   61251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 08:07:57.710609   61251 ssh_runner.go:195] Run: openssl version
	I0915 08:07:57.717346   61251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 08:07:57.728725   61251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 08:07:57.733061   61251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 08:07:57.733119   61251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 08:07:57.738941   61251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 08:07:57.750658   61251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 08:07:57.761933   61251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:57.766456   61251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:57.766533   61251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 08:07:57.772300   61251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 08:07:57.783031   61251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 08:07:57.793351   61251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 08:07:57.797729   61251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 08:07:57.797764   61251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 08:07:57.803267   61251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
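The three certificate blocks above follow the same pattern: copy the PEM into /usr/share/ca-certificates, hash it with openssl, and symlink it into /etc/ssl/certs as "<hash>.0". A compact sketch of that pattern (file names and hashes taken from the log; the loop itself is illustrative):

    for pem in /usr/share/ca-certificates/131902.pem \
               /usr/share/ca-certificates/minikubeCA.pem \
               /usr/share/ca-certificates/13190.pem; do
      h=$(openssl x509 -hash -noout -in "$pem")   # e.g. 3ec20f2e, b5213941, 51391683
      sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    done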
	I0915 08:07:57.813413   61251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 08:07:57.817815   61251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 08:07:57.823545   61251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 08:07:57.829603   61251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 08:07:57.835807   61251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 08:07:57.841359   61251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 08:07:57.846967   61251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
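The -checkend 86400 checks above exit non-zero if a certificate expires within the next 24 hours; a hedged loop over the same certs (paths from the log, warning message illustrative):

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${crt}.crt" \
        || echo "WARN: ${crt}.crt expires within 24h"
    done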
	I0915 08:07:53.388053   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:53.887767   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:54.388604   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:54.888756   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:55.388570   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:55.887858   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:56.388391   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:56.888682   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:57.388111   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:57.887850   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:57.010435   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:59.012241   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:07:57.852886   61251 kubeadm.go:392] StartCluster: {Name:no-preload-778087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-778087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 08:07:57.852969   61251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 08:07:57.853011   61251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 08:07:57.894035   61251 cri.go:89] found id: ""
	I0915 08:07:57.894111   61251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 08:07:57.904788   61251 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 08:07:57.904807   61251 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 08:07:57.904848   61251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 08:07:57.914958   61251 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 08:07:57.916098   61251 kubeconfig.go:125] found "no-preload-778087" server: "https://192.168.61.247:8443"
	I0915 08:07:57.918349   61251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 08:07:57.928644   61251 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.247
	I0915 08:07:57.928704   61251 kubeadm.go:1160] stopping kube-system containers ...
	I0915 08:07:57.928720   61251 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0915 08:07:57.928781   61251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 08:07:57.972243   61251 cri.go:89] found id: ""
	I0915 08:07:57.972334   61251 ssh_runner.go:195] Run: sudo systemctl stop kubelet
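
Note: both crictl listings above return no container IDs because the guest comes up empty before the restart, so there is nothing to stop. As a rough sketch of what the "stopping kube-system containers" step amounts to (illustrative only; minikube runs these commands through its SSH runner rather than locally on the node):

// Sketch: list containers whose pod lives in kube-system, stop them,
// then stop the kubelet so it cannot restart them mid-reconfiguration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		_ = exec.Command("sudo", "crictl", "stop", id).Run() // no-op when the list is empty, as in this log
	}
	_ = exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}
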
	I0915 08:07:57.993182   61251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 08:07:58.003888   61251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 08:07:58.003912   61251 kubeadm.go:157] found existing configuration files:
	
	I0915 08:07:58.003952   61251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 08:07:58.013922   61251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 08:07:58.013987   61251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 08:07:58.023583   61251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 08:07:58.033413   61251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 08:07:58.033468   61251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 08:07:58.043193   61251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 08:07:58.052687   61251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 08:07:58.052734   61251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 08:07:58.062104   61251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 08:07:58.071946   61251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 08:07:58.071996   61251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
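
Note: the grep/rm sequence above is a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is deleted so the next kubeadm phase can regenerate it. A minimal sketch of that pattern (illustrative, not the actual kubeadm.go code):

// Sketch: drop any kubeconfig that is missing or points at the wrong endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it (rm -f semantics).
			fmt.Printf("%q does not reference %s - removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}
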
	I0915 08:07:58.081340   61251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 08:07:58.091264   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:58.203195   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:59.245699   61251 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.042468083s)
	I0915 08:07:59.245748   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:59.463327   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:07:59.527688   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
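
Note: rather than a full `kubeadm init`, the restart path replays individual init phases against the staged /var/tmp/minikube/kubeadm.yaml. A sketch of that loop, using the same phase names and pinned binary path shown in the log (error handling trimmed; not minikube's actual implementation):

// Sketch: regenerate the control plane piecewise with `kubeadm init phase ...`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" `+
				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			panic(fmt.Errorf("phase %q failed: %w", p, err))
		}
	}
}
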
	I0915 08:07:59.650251   61251 api_server.go:52] waiting for apiserver process to appear ...
	I0915 08:07:59.650344   61251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:00.151306   61251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:00.650714   61251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:00.670628   61251 api_server.go:72] duration metric: took 1.020373547s to wait for apiserver process to appear ...
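
Note: the roughly 500ms pgrep polling above can be summarized by a loop like the following (the two-minute timeout is an illustrative assumption, not minikube's configured value):

// Sketch: wait for a kube-apiserver process whose command line mentions "minikube".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for kube-apiserver process")
}
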
	I0915 08:08:00.670657   61251 api_server.go:88] waiting for apiserver healthz status ...
	I0915 08:08:00.670682   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:08:00.671235   61251 api_server.go:269] stopped: https://192.168.61.247:8443/healthz: Get "https://192.168.61.247:8443/healthz": dial tcp 192.168.61.247:8443: connect: connection refused
	I0915 08:08:01.171134   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:07:58.387687   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:58.888681   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:59.388273   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:07:59.887692   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:00.387998   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:00.887938   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:01.387832   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:01.887867   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:02.388581   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:02.887686   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:03.634400   61251 api_server.go:279] https://192.168.61.247:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 08:08:03.634428   61251 api_server.go:103] status: https://192.168.61.247:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 08:08:03.634445   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:08:03.709752   61251 api_server.go:279] https://192.168.61.247:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 08:08:03.709789   61251 api_server.go:103] status: https://192.168.61.247:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 08:08:03.709822   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:08:03.760177   61251 api_server.go:279] https://192.168.61.247:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 08:08:03.760205   61251 api_server.go:103] status: https://192.168.61.247:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 08:08:04.171709   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:08:04.176370   61251 api_server.go:279] https://192.168.61.247:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 08:08:04.176400   61251 api_server.go:103] status: https://192.168.61.247:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 08:08:04.671024   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:08:04.675081   61251 api_server.go:279] https://192.168.61.247:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 08:08:04.675107   61251 api_server.go:103] status: https://192.168.61.247:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 08:08:05.171752   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:08:05.176476   61251 api_server.go:279] https://192.168.61.247:8443/healthz returned 200:
	ok
	I0915 08:08:05.183534   61251 api_server.go:141] control plane version: v1.31.1
	I0915 08:08:05.183561   61251 api_server.go:131] duration metric: took 4.512897123s to wait for apiserver health ...
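
Note: during the healthz wait above, "connection refused", the 403 responses ("system:anonymous" is rejected until the RBAC bootstrap roles exist), and the 500 responses (post-start hooks still failing) are all treated the same way: not yet healthy, retry. A minimal sketch of such a probe, assuming direct access to the node IP and skipping TLS verification for the self-signed apiserver certificate:

// Sketch: poll /healthz until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.247:8443/healthz"
	for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // e.g. "connection refused" right after the restart
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz returned 200: %s\n", body)
			return
		}
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	}
	panic("apiserver never became healthy")
}
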
	I0915 08:08:05.183569   61251 cni.go:84] Creating CNI manager for ""
	I0915 08:08:05.183576   61251 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 08:08:05.185327   61251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 08:08:01.018988   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:03.512104   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:05.512212   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:05.186714   61251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 08:08:05.198342   61251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
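
Note: the 496-byte 1-k8s.conflist written above is a bridge CNI chain, but the log does not show its contents. The snippet below only writes a representative bridge+portmap configuration; the subnet and flag values are assumptions, not the file minikube actually generated here:

// Sketch: install a representative bridge CNI config at the path used above.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
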
	I0915 08:08:05.219458   61251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 08:08:05.232070   61251 system_pods.go:59] 8 kube-system pods found
	I0915 08:08:05.232126   61251 system_pods.go:61] "coredns-7c65d6cfc9-xbvrd" [271712fa-0fe3-44f3-898f-12e5a30d3a79] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 08:08:05.232144   61251 system_pods.go:61] "etcd-no-preload-778087" [4efdc0ef-ba7b-4090-82b7-8d2cb35aab39] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0915 08:08:05.232156   61251 system_pods.go:61] "kube-apiserver-no-preload-778087" [d06944b2-19bf-4d6a-b862-69e28d8d3991] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 08:08:05.232176   61251 system_pods.go:61] "kube-controller-manager-no-preload-778087" [59bfb273-2f4f-4cf1-ae8e-6398c92b6d81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 08:08:05.232185   61251 system_pods.go:61] "kube-proxy-2qg9r" [c34dcf5b-b172-4c9a-b7b5-6fb43564df4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0915 08:08:05.232194   61251 system_pods.go:61] "kube-scheduler-no-preload-778087" [1978dd3a-2bae-45dd-8e81-acb164693b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0915 08:08:05.232202   61251 system_pods.go:61] "metrics-server-6867b74b74-d5nzc" [4ce62161-4931-423a-9d68-c17512ec80ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 08:08:05.232210   61251 system_pods.go:61] "storage-provisioner" [22a8e26f-7033-49e1-8e14-8d4bd03822d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0915 08:08:05.232229   61251 system_pods.go:74] duration metric: took 12.738151ms to wait for pod list to return data ...
	I0915 08:08:05.232241   61251 node_conditions.go:102] verifying NodePressure condition ...
	I0915 08:08:05.236202   61251 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 08:08:05.236232   61251 node_conditions.go:123] node cpu capacity is 2
	I0915 08:08:05.236247   61251 node_conditions.go:105] duration metric: took 4.000375ms to run NodePressure ...
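
Note: the NodePressure step reads the node's reported capacity (the 17734596Ki ephemeral storage and 2 CPUs above) and checks its pressure conditions. A client-go sketch of that kind of check; the kubeconfig path is illustrative, not the one this job used:

// Sketch: report node capacity and any pressure conditions that are True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}
}
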
	I0915 08:08:05.236269   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 08:08:05.514787   61251 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0915 08:08:05.518840   61251 kubeadm.go:739] kubelet initialised
	I0915 08:08:05.518858   61251 kubeadm.go:740] duration metric: took 4.05072ms waiting for restarted kubelet to initialise ...
	I0915 08:08:05.518865   61251 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 08:08:05.523374   61251 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xbvrd" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:05.527724   61251 pod_ready.go:98] node "no-preload-778087" hosting pod "coredns-7c65d6cfc9-xbvrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.527749   61251 pod_ready.go:82] duration metric: took 4.355221ms for pod "coredns-7c65d6cfc9-xbvrd" in "kube-system" namespace to be "Ready" ...
	E0915 08:08:05.527758   61251 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-778087" hosting pod "coredns-7c65d6cfc9-xbvrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.527768   61251 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:05.533136   61251 pod_ready.go:98] node "no-preload-778087" hosting pod "etcd-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.533156   61251 pod_ready.go:82] duration metric: took 5.381689ms for pod "etcd-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	E0915 08:08:05.533163   61251 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-778087" hosting pod "etcd-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.533169   61251 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:05.537008   61251 pod_ready.go:98] node "no-preload-778087" hosting pod "kube-apiserver-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.537025   61251 pod_ready.go:82] duration metric: took 3.848343ms for pod "kube-apiserver-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	E0915 08:08:05.537032   61251 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-778087" hosting pod "kube-apiserver-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.537038   61251 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:05.623757   61251 pod_ready.go:98] node "no-preload-778087" hosting pod "kube-controller-manager-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.623788   61251 pod_ready.go:82] duration metric: took 86.741377ms for pod "kube-controller-manager-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	E0915 08:08:05.623801   61251 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-778087" hosting pod "kube-controller-manager-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:05.623809   61251 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2qg9r" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:06.023747   61251 pod_ready.go:98] node "no-preload-778087" hosting pod "kube-proxy-2qg9r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:06.023771   61251 pod_ready.go:82] duration metric: took 399.9522ms for pod "kube-proxy-2qg9r" in "kube-system" namespace to be "Ready" ...
	E0915 08:08:06.023779   61251 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-778087" hosting pod "kube-proxy-2qg9r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:06.023787   61251 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:06.422953   61251 pod_ready.go:98] node "no-preload-778087" hosting pod "kube-scheduler-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:06.422980   61251 pod_ready.go:82] duration metric: took 399.186059ms for pod "kube-scheduler-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	E0915 08:08:06.422992   61251 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-778087" hosting pod "kube-scheduler-no-preload-778087" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:06.423002   61251 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:06.823554   61251 pod_ready.go:98] node "no-preload-778087" hosting pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:06.823581   61251 pod_ready.go:82] duration metric: took 400.56948ms for pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace to be "Ready" ...
	E0915 08:08:06.823593   61251 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-778087" hosting pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:06.823603   61251 pod_ready.go:39] duration metric: took 1.30472923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
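
Note: the pod_ready waits above poll each pod's PodReady condition and give up early (the "skipping!" lines) when the hosting node itself is not Ready, since no pod can become Ready on a NotReady node. A hedged client-go sketch of that logic, using the coredns pod named in the log and an illustrative kubeconfig path:

// Sketch: wait for a pod's Ready condition, skipping if its node is NotReady.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-xbvrd", metav1.GetOptions{})
		if err == nil {
			node, nerr := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
			if nerr == nil && !nodeReady(node) {
				fmt.Println("hosting node not Ready, skipping this pod for now")
				return
			}
			if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for pod to be Ready")
}
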
	I0915 08:08:06.823624   61251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 08:08:06.835974   61251 ops.go:34] apiserver oom_adj: -16
	I0915 08:08:06.836002   61251 kubeadm.go:597] duration metric: took 8.931188689s to restartPrimaryControlPlane
	I0915 08:08:06.836013   61251 kubeadm.go:394] duration metric: took 8.983137397s to StartCluster
	I0915 08:08:06.836035   61251 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:08:06.836118   61251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 08:08:06.837724   61251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 08:08:06.837998   61251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 08:08:06.838067   61251 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 08:08:06.838167   61251 addons.go:69] Setting storage-provisioner=true in profile "no-preload-778087"
	I0915 08:08:06.838184   61251 addons.go:234] Setting addon storage-provisioner=true in "no-preload-778087"
	W0915 08:08:06.838195   61251 addons.go:243] addon storage-provisioner should already be in state true
	I0915 08:08:06.838203   61251 addons.go:69] Setting default-storageclass=true in profile "no-preload-778087"
	I0915 08:08:06.838224   61251 host.go:66] Checking if "no-preload-778087" exists ...
	I0915 08:08:06.838226   61251 config.go:182] Loaded profile config "no-preload-778087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 08:08:06.838225   61251 addons.go:69] Setting metrics-server=true in profile "no-preload-778087"
	I0915 08:08:06.838238   61251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-778087"
	I0915 08:08:06.838243   61251 addons.go:234] Setting addon metrics-server=true in "no-preload-778087"
	W0915 08:08:06.838349   61251 addons.go:243] addon metrics-server should already be in state true
	I0915 08:08:06.838380   61251 host.go:66] Checking if "no-preload-778087" exists ...
	I0915 08:08:06.838558   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:08:06.838595   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:08:06.838599   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:08:06.838634   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:08:06.838719   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:08:06.838758   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:08:06.840725   61251 out.go:177] * Verifying Kubernetes components...
	I0915 08:08:06.842123   61251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 08:08:06.855538   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38949
	I0915 08:08:06.855569   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
	I0915 08:08:06.856013   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:08:06.856028   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:08:06.856524   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:08:06.856540   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:08:06.856621   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:08:06.856641   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:08:06.856844   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:08:06.856937   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:08:06.857090   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetState
	I0915 08:08:06.857366   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:08:06.857413   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:08:06.860668   61251 addons.go:234] Setting addon default-storageclass=true in "no-preload-778087"
	W0915 08:08:06.860690   61251 addons.go:243] addon default-storageclass should already be in state true
	I0915 08:08:06.860720   61251 host.go:66] Checking if "no-preload-778087" exists ...
	I0915 08:08:06.861073   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:08:06.861128   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:08:06.873918   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40155
	I0915 08:08:06.874369   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:08:06.874957   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:08:06.874983   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:08:06.875351   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:08:06.875557   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetState
	I0915 08:08:06.877596   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:08:06.878584   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0915 08:08:06.879046   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:08:06.879645   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:08:06.879670   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:08:06.879777   61251 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 08:08:06.879976   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:08:06.880577   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:08:06.880625   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:08:06.881235   61251 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 08:08:06.881254   61251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 08:08:06.881274   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:08:06.882528   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0915 08:08:06.882930   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:08:06.883802   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:08:06.883826   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:08:06.884126   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:08:06.884695   61251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 08:08:06.884738   61251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 08:08:06.885201   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:08:06.885710   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:08:06.885737   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:08:06.886018   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:08:06.886235   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:08:06.886409   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:08:06.886569   61251 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa Username:docker}
	I0915 08:08:06.897845   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0915 08:08:06.898357   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:08:06.898824   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:08:06.898839   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:08:06.899144   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:08:06.899255   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetState
	I0915 08:08:06.900950   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:08:06.902686   61251 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0915 08:08:06.903805   61251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0915 08:08:06.904319   61251 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 08:08:06.904339   61251 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 08:08:06.904358   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:08:06.904923   61251 main.go:141] libmachine: () Calling .GetVersion
	I0915 08:08:06.905799   61251 main.go:141] libmachine: Using API Version  1
	I0915 08:08:06.905841   61251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 08:08:06.906283   61251 main.go:141] libmachine: () Calling .GetMachineName
	I0915 08:08:06.906495   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetState
	I0915 08:08:06.907856   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:08:06.908343   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:08:06.908382   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:08:06.908532   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:08:06.908743   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:08:06.908889   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:08:06.908997   61251 main.go:141] libmachine: (no-preload-778087) Calling .DriverName
	I0915 08:08:06.909063   61251 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa Username:docker}
	I0915 08:08:06.909397   61251 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 08:08:06.909410   61251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 08:08:06.909424   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHHostname
	I0915 08:08:06.911645   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:08:06.911903   61251 main.go:141] libmachine: (no-preload-778087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:53:6a", ip: ""} in network mk-no-preload-778087: {Iface:virbr4 ExpiryTime:2024-09-15 09:07:32 +0000 UTC Type:0 Mac:52:54:00:48:53:6a Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:no-preload-778087 Clientid:01:52:54:00:48:53:6a}
	I0915 08:08:06.911920   61251 main.go:141] libmachine: (no-preload-778087) DBG | domain no-preload-778087 has defined IP address 192.168.61.247 and MAC address 52:54:00:48:53:6a in network mk-no-preload-778087
	I0915 08:08:06.912064   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHPort
	I0915 08:08:06.912189   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHKeyPath
	I0915 08:08:06.912270   61251 main.go:141] libmachine: (no-preload-778087) Calling .GetSSHUsername
	I0915 08:08:06.912363   61251 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/no-preload-778087/id_rsa Username:docker}
	I0915 08:08:07.088854   61251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 08:08:07.108221   61251 node_ready.go:35] waiting up to 6m0s for node "no-preload-778087" to be "Ready" ...
	I0915 08:08:07.212256   61251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 08:08:07.287185   61251 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 08:08:07.287214   61251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0915 08:08:07.316915   61251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 08:08:07.333657   61251 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 08:08:07.333689   61251 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 08:08:07.388934   61251 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 08:08:07.388956   61251 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 08:08:07.440272   61251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
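
Note: the addon manifests staged under /etc/kubernetes/addons are applied with the version-pinned kubectl binary and the node-local kubeconfig, exactly as in the Run lines above. A bare-bones sketch wrapping that same command (a plain exec, not minikube's kubectl runner):

// Sketch: apply the staged metrics-server manifests with the pinned kubectl.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("kubectl apply failed: %v\n%s", err, out))
	}
	fmt.Printf("%s", out)
}
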
	I0915 08:08:03.388407   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:03.887678   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:04.388070   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:04.888521   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:05.387949   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:05.888097   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:06.387707   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:06.887781   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:07.387759   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:07.888332   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:08.401647   61251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.189353578s)
	I0915 08:08:08.401690   61251 main.go:141] libmachine: Making call to close driver server
	I0915 08:08:08.401717   61251 main.go:141] libmachine: (no-preload-778087) Calling .Close
	I0915 08:08:08.401718   61251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.08476905s)
	I0915 08:08:08.401759   61251 main.go:141] libmachine: Making call to close driver server
	I0915 08:08:08.401781   61251 main.go:141] libmachine: (no-preload-778087) Calling .Close
	I0915 08:08:08.402153   61251 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:08:08.402171   61251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:08:08.402180   61251 main.go:141] libmachine: Making call to close driver server
	I0915 08:08:08.402201   61251 main.go:141] libmachine: (no-preload-778087) DBG | Closing plugin on server side
	I0915 08:08:08.402234   61251 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:08:08.402264   61251 main.go:141] libmachine: (no-preload-778087) Calling .Close
	I0915 08:08:08.402281   61251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:08:08.402291   61251 main.go:141] libmachine: Making call to close driver server
	I0915 08:08:08.402303   61251 main.go:141] libmachine: (no-preload-778087) Calling .Close
	I0915 08:08:08.402236   61251 main.go:141] libmachine: (no-preload-778087) DBG | Closing plugin on server side
	I0915 08:08:08.402489   61251 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:08:08.402549   61251 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:08:08.402555   61251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:08:08.402557   61251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:08:08.402515   61251 main.go:141] libmachine: (no-preload-778087) DBG | Closing plugin on server side
	I0915 08:08:08.409104   61251 main.go:141] libmachine: Making call to close driver server
	I0915 08:08:08.409120   61251 main.go:141] libmachine: (no-preload-778087) Calling .Close
	I0915 08:08:08.409364   61251 main.go:141] libmachine: (no-preload-778087) DBG | Closing plugin on server side
	I0915 08:08:08.409414   61251 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:08:08.409428   61251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:08:08.493836   61251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053504769s)
	I0915 08:08:08.493892   61251 main.go:141] libmachine: Making call to close driver server
	I0915 08:08:08.493905   61251 main.go:141] libmachine: (no-preload-778087) Calling .Close
	I0915 08:08:08.494216   61251 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:08:08.494234   61251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:08:08.494242   61251 main.go:141] libmachine: Making call to close driver server
	I0915 08:08:08.494249   61251 main.go:141] libmachine: (no-preload-778087) Calling .Close
	I0915 08:08:08.494454   61251 main.go:141] libmachine: Successfully made call to close driver server
	I0915 08:08:08.494467   61251 main.go:141] libmachine: Making call to close connection to plugin binary
	I0915 08:08:08.494476   61251 addons.go:475] Verifying addon metrics-server=true in "no-preload-778087"
	I0915 08:08:08.496274   61251 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0915 08:08:08.012182   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:10.510879   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:08.497594   61251 addons.go:510] duration metric: took 1.659534168s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0915 08:08:09.112202   61251 node_ready.go:53] node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:11.113284   61251 node_ready.go:53] node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:08.388492   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:08.887792   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:09.388063   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:09.888450   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:10.387931   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:10.887843   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:11.387780   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:11.888450   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:12.388145   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:12.888684   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:12.512275   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:15.014251   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:13.611740   61251 node_ready.go:53] node "no-preload-778087" has status "Ready":"False"
	I0915 08:08:14.611805   61251 node_ready.go:49] node "no-preload-778087" has status "Ready":"True"
	I0915 08:08:14.611834   61251 node_ready.go:38] duration metric: took 7.503577239s for node "no-preload-778087" to be "Ready" ...
	I0915 08:08:14.611846   61251 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 08:08:14.617428   61251 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xbvrd" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:14.621902   61251 pod_ready.go:93] pod "coredns-7c65d6cfc9-xbvrd" in "kube-system" namespace has status "Ready":"True"
	I0915 08:08:14.621927   61251 pod_ready.go:82] duration metric: took 4.472925ms for pod "coredns-7c65d6cfc9-xbvrd" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:14.621952   61251 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:16.629553   61251 pod_ready.go:103] pod "etcd-no-preload-778087" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:13.388008   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:13.888495   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:14.388192   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:14.888016   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:15.387945   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:15.887984   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:16.388310   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:16.888319   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:17.388540   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:17.887683   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:17.510660   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:19.510708   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:18.128139   61251 pod_ready.go:93] pod "etcd-no-preload-778087" in "kube-system" namespace has status "Ready":"True"
	I0915 08:08:18.128167   61251 pod_ready.go:82] duration metric: took 3.506207218s for pod "etcd-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.128178   61251 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.132350   61251 pod_ready.go:93] pod "kube-apiserver-no-preload-778087" in "kube-system" namespace has status "Ready":"True"
	I0915 08:08:18.132375   61251 pod_ready.go:82] duration metric: took 4.189709ms for pod "kube-apiserver-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.132387   61251 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.138846   61251 pod_ready.go:93] pod "kube-controller-manager-no-preload-778087" in "kube-system" namespace has status "Ready":"True"
	I0915 08:08:18.138864   61251 pod_ready.go:82] duration metric: took 6.47036ms for pod "kube-controller-manager-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.138873   61251 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2qg9r" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.144559   61251 pod_ready.go:93] pod "kube-proxy-2qg9r" in "kube-system" namespace has status "Ready":"True"
	I0915 08:08:18.144582   61251 pod_ready.go:82] duration metric: took 5.701965ms for pod "kube-proxy-2qg9r" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.144593   61251 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.212033   61251 pod_ready.go:93] pod "kube-scheduler-no-preload-778087" in "kube-system" namespace has status "Ready":"True"
	I0915 08:08:18.212055   61251 pod_ready.go:82] duration metric: took 67.454745ms for pod "kube-scheduler-no-preload-778087" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:18.212064   61251 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace to be "Ready" ...
	I0915 08:08:20.219069   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:22.219325   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:18.387707   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:18.888387   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:19.388541   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:19.888653   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:20.388154   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:20.887655   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:21.388427   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:21.888739   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:22.387718   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:22.888562   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:22.011662   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:24.510584   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:24.719165   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:26.719678   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:23.387978   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:23.888516   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:24.388008   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:24.888111   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:25.388182   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:25.888145   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:26.388017   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:26.888649   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:27.388442   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:27.887822   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:26.517471   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:29.010886   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:29.219723   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:31.719856   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:28.387999   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:28.888392   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:29.388472   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:29.888520   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:30.387954   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:30.888453   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:31.387678   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:31.387758   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:31.432209   61935 cri.go:89] found id: ""
	I0915 08:08:31.432243   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.432256   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:31.432263   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:31.432329   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:31.467966   61935 cri.go:89] found id: ""
	I0915 08:08:31.468001   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.468013   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:31.468021   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:31.468077   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:31.501241   61935 cri.go:89] found id: ""
	I0915 08:08:31.501271   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.501281   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:31.501286   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:31.501339   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:31.539613   61935 cri.go:89] found id: ""
	I0915 08:08:31.539636   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.539644   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:31.539650   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:31.539697   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:31.577749   61935 cri.go:89] found id: ""
	I0915 08:08:31.577782   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.577794   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:31.577802   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:31.577880   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:31.615725   61935 cri.go:89] found id: ""
	I0915 08:08:31.615759   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.615768   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:31.615774   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:31.615822   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:31.651397   61935 cri.go:89] found id: ""
	I0915 08:08:31.651419   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.651432   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:31.651437   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:31.651484   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:31.685427   61935 cri.go:89] found id: ""
	I0915 08:08:31.685460   61935 logs.go:276] 0 containers: []
	W0915 08:08:31.685470   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:31.685478   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:31.685489   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:31.742585   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:31.742618   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:31.757646   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:31.757679   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:31.891079   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:31.891101   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:31.891113   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:31.970639   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:31.970672   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:31.511087   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:34.011281   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:33.720021   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:36.218378   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:34.517027   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:34.530469   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:34.530547   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:34.566157   61935 cri.go:89] found id: ""
	I0915 08:08:34.566188   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.566199   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:34.566207   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:34.566276   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:34.606545   61935 cri.go:89] found id: ""
	I0915 08:08:34.606573   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.606582   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:34.606587   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:34.606636   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:34.645946   61935 cri.go:89] found id: ""
	I0915 08:08:34.645971   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.645981   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:34.645988   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:34.646050   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:34.686017   61935 cri.go:89] found id: ""
	I0915 08:08:34.686042   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.686052   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:34.686060   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:34.686123   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:34.723058   61935 cri.go:89] found id: ""
	I0915 08:08:34.723087   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.723130   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:34.723143   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:34.723207   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:34.758430   61935 cri.go:89] found id: ""
	I0915 08:08:34.758464   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.758475   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:34.758482   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:34.758548   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:34.793725   61935 cri.go:89] found id: ""
	I0915 08:08:34.793754   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.793763   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:34.793768   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:34.793842   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:34.829767   61935 cri.go:89] found id: ""
	I0915 08:08:34.829795   61935 logs.go:276] 0 containers: []
	W0915 08:08:34.829819   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:34.829830   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:34.829852   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:34.880675   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:34.880711   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:34.894638   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:34.894667   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:34.970647   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:34.970675   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:34.970696   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:35.055903   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:35.055948   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:37.631153   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:37.645718   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:37.645789   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:37.681574   61935 cri.go:89] found id: ""
	I0915 08:08:37.681660   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.681678   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:37.681684   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:37.681735   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:37.721500   61935 cri.go:89] found id: ""
	I0915 08:08:37.721528   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.721539   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:37.721546   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:37.721623   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:37.759098   61935 cri.go:89] found id: ""
	I0915 08:08:37.759145   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.759156   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:37.759164   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:37.759223   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:37.794560   61935 cri.go:89] found id: ""
	I0915 08:08:37.794585   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.794596   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:37.794603   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:37.794668   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:37.829460   61935 cri.go:89] found id: ""
	I0915 08:08:37.829484   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.829494   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:37.829501   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:37.829562   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:37.863959   61935 cri.go:89] found id: ""
	I0915 08:08:37.863982   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.863990   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:37.864012   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:37.864073   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:36.011403   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:38.012642   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:40.510943   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:38.219968   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:40.718998   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:42.720469   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:37.908026   61935 cri.go:89] found id: ""
	I0915 08:08:37.908053   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.908064   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:37.908071   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:37.908132   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:37.947023   61935 cri.go:89] found id: ""
	I0915 08:08:37.947045   61935 logs.go:276] 0 containers: []
	W0915 08:08:37.947053   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:37.947061   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:37.947071   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:37.999770   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:37.999801   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:38.015990   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:38.016013   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:38.088856   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:38.088878   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:38.088892   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:38.175562   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:38.175601   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:40.721877   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:40.736901   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:40.736957   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:40.773066   61935 cri.go:89] found id: ""
	I0915 08:08:40.773092   61935 logs.go:276] 0 containers: []
	W0915 08:08:40.773104   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:40.773112   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:40.773189   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:40.808177   61935 cri.go:89] found id: ""
	I0915 08:08:40.808202   61935 logs.go:276] 0 containers: []
	W0915 08:08:40.808210   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:40.808216   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:40.808267   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:40.850306   61935 cri.go:89] found id: ""
	I0915 08:08:40.850332   61935 logs.go:276] 0 containers: []
	W0915 08:08:40.850342   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:40.850350   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:40.850402   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:40.895287   61935 cri.go:89] found id: ""
	I0915 08:08:40.895314   61935 logs.go:276] 0 containers: []
	W0915 08:08:40.895322   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:40.895328   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:40.895387   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:40.951090   61935 cri.go:89] found id: ""
	I0915 08:08:40.951118   61935 logs.go:276] 0 containers: []
	W0915 08:08:40.951130   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:40.951137   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:40.951215   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:40.985235   61935 cri.go:89] found id: ""
	I0915 08:08:40.985258   61935 logs.go:276] 0 containers: []
	W0915 08:08:40.985268   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:40.985276   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:40.985335   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:41.019732   61935 cri.go:89] found id: ""
	I0915 08:08:41.019755   61935 logs.go:276] 0 containers: []
	W0915 08:08:41.019765   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:41.019772   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:41.019830   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:41.053414   61935 cri.go:89] found id: ""
	I0915 08:08:41.053438   61935 logs.go:276] 0 containers: []
	W0915 08:08:41.053446   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:41.053459   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:41.053469   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:41.107747   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:41.107784   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:41.121354   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:41.121380   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:41.195145   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:41.195170   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:41.195194   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:41.274097   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:41.274131   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:42.511135   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:45.011463   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:45.218992   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:47.719392   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:43.816541   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:43.829703   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:43.829764   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:43.864976   61935 cri.go:89] found id: ""
	I0915 08:08:43.865001   61935 logs.go:276] 0 containers: []
	W0915 08:08:43.865009   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:43.865014   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:43.865061   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:43.899520   61935 cri.go:89] found id: ""
	I0915 08:08:43.899546   61935 logs.go:276] 0 containers: []
	W0915 08:08:43.899557   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:43.899562   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:43.899608   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:43.931046   61935 cri.go:89] found id: ""
	I0915 08:08:43.931075   61935 logs.go:276] 0 containers: []
	W0915 08:08:43.931086   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:43.931095   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:43.931154   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:43.967162   61935 cri.go:89] found id: ""
	I0915 08:08:43.967199   61935 logs.go:276] 0 containers: []
	W0915 08:08:43.967210   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:43.967217   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:43.967279   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:44.002241   61935 cri.go:89] found id: ""
	I0915 08:08:44.002273   61935 logs.go:276] 0 containers: []
	W0915 08:08:44.002285   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:44.002293   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:44.002349   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:44.039121   61935 cri.go:89] found id: ""
	I0915 08:08:44.039154   61935 logs.go:276] 0 containers: []
	W0915 08:08:44.039165   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:44.039173   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:44.039239   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:44.074861   61935 cri.go:89] found id: ""
	I0915 08:08:44.074888   61935 logs.go:276] 0 containers: []
	W0915 08:08:44.074899   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:44.074906   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:44.074967   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:44.112045   61935 cri.go:89] found id: ""
	I0915 08:08:44.112076   61935 logs.go:276] 0 containers: []
	W0915 08:08:44.112087   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:44.112095   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:44.112105   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:44.164477   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:44.164513   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:44.179341   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:44.179372   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:44.256422   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:44.256441   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:44.256453   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:44.338125   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:44.338156   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:46.879061   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:46.893786   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:46.893867   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:46.926899   61935 cri.go:89] found id: ""
	I0915 08:08:46.926924   61935 logs.go:276] 0 containers: []
	W0915 08:08:46.926932   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:46.926938   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:46.926987   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:46.966428   61935 cri.go:89] found id: ""
	I0915 08:08:46.966450   61935 logs.go:276] 0 containers: []
	W0915 08:08:46.966459   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:46.966464   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:46.966515   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:47.003812   61935 cri.go:89] found id: ""
	I0915 08:08:47.003838   61935 logs.go:276] 0 containers: []
	W0915 08:08:47.003849   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:47.003856   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:47.003914   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:47.038371   61935 cri.go:89] found id: ""
	I0915 08:08:47.038398   61935 logs.go:276] 0 containers: []
	W0915 08:08:47.038408   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:47.038415   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:47.038484   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:47.076392   61935 cri.go:89] found id: ""
	I0915 08:08:47.076423   61935 logs.go:276] 0 containers: []
	W0915 08:08:47.076433   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:47.076447   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:47.076523   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:47.110256   61935 cri.go:89] found id: ""
	I0915 08:08:47.110286   61935 logs.go:276] 0 containers: []
	W0915 08:08:47.110296   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:47.110303   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:47.110367   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:47.143828   61935 cri.go:89] found id: ""
	I0915 08:08:47.143856   61935 logs.go:276] 0 containers: []
	W0915 08:08:47.143864   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:47.143870   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:47.143923   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:47.178027   61935 cri.go:89] found id: ""
	I0915 08:08:47.178052   61935 logs.go:276] 0 containers: []
	W0915 08:08:47.178060   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:47.178067   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:47.178079   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:47.253587   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:47.253620   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:47.300745   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:47.300773   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:47.354106   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:47.354142   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:47.367609   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:47.367631   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:47.438452   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:47.510772   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:49.510936   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:49.720768   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:52.218388   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:49.939554   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:49.952467   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:49.952528   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:49.986555   61935 cri.go:89] found id: ""
	I0915 08:08:49.986586   61935 logs.go:276] 0 containers: []
	W0915 08:08:49.986597   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:49.986605   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:49.986674   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:50.021012   61935 cri.go:89] found id: ""
	I0915 08:08:50.021037   61935 logs.go:276] 0 containers: []
	W0915 08:08:50.021045   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:50.021050   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:50.021115   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:50.053980   61935 cri.go:89] found id: ""
	I0915 08:08:50.054005   61935 logs.go:276] 0 containers: []
	W0915 08:08:50.054013   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:50.054018   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:50.054073   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:50.087620   61935 cri.go:89] found id: ""
	I0915 08:08:50.087652   61935 logs.go:276] 0 containers: []
	W0915 08:08:50.087664   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:50.087671   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:50.087735   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:50.121157   61935 cri.go:89] found id: ""
	I0915 08:08:50.121186   61935 logs.go:276] 0 containers: []
	W0915 08:08:50.121195   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:50.121201   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:50.121263   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:50.154655   61935 cri.go:89] found id: ""
	I0915 08:08:50.154681   61935 logs.go:276] 0 containers: []
	W0915 08:08:50.154689   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:50.154695   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:50.154743   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:50.192370   61935 cri.go:89] found id: ""
	I0915 08:08:50.192401   61935 logs.go:276] 0 containers: []
	W0915 08:08:50.192413   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:50.192421   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:50.192480   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:50.226592   61935 cri.go:89] found id: ""
	I0915 08:08:50.226622   61935 logs.go:276] 0 containers: []
	W0915 08:08:50.226631   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:50.226639   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:50.226650   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:50.277633   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:50.277662   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:50.290305   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:50.290329   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:50.358553   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:50.358574   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:50.358585   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:50.435272   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:50.435303   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:51.511326   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:53.512431   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:54.218965   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:56.718695   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:52.972202   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:52.986029   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:52.986131   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:53.029948   61935 cri.go:89] found id: ""
	I0915 08:08:53.029977   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.029990   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:53.029998   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:53.030055   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:53.067857   61935 cri.go:89] found id: ""
	I0915 08:08:53.067884   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.067894   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:53.067901   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:53.067967   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:53.103458   61935 cri.go:89] found id: ""
	I0915 08:08:53.103500   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.103511   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:53.103518   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:53.103588   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:53.138814   61935 cri.go:89] found id: ""
	I0915 08:08:53.138907   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.138936   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:53.138949   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:53.139012   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:53.174243   61935 cri.go:89] found id: ""
	I0915 08:08:53.174288   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.174300   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:53.174306   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:53.174366   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:53.209202   61935 cri.go:89] found id: ""
	I0915 08:08:53.209230   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.209242   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:53.209253   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:53.209340   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:53.246533   61935 cri.go:89] found id: ""
	I0915 08:08:53.246564   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.246574   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:53.246582   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:53.246658   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:53.283354   61935 cri.go:89] found id: ""
	I0915 08:08:53.283377   61935 logs.go:276] 0 containers: []
	W0915 08:08:53.283385   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:53.283392   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:53.283403   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:53.337021   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:53.337061   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:53.351581   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:53.351609   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:53.433086   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:53.433107   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:53.433119   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:53.514638   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:53.514668   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:56.056277   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:56.069824   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:56.069887   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:56.109012   61935 cri.go:89] found id: ""
	I0915 08:08:56.109040   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.109051   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:56.109059   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:56.109122   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:56.145520   61935 cri.go:89] found id: ""
	I0915 08:08:56.145551   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.145563   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:56.145570   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:56.145632   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:56.182847   61935 cri.go:89] found id: ""
	I0915 08:08:56.182878   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.182891   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:56.182899   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:56.182961   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:56.219545   61935 cri.go:89] found id: ""
	I0915 08:08:56.219568   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.219574   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:56.219580   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:56.219626   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:56.255117   61935 cri.go:89] found id: ""
	I0915 08:08:56.255143   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.255150   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:56.255155   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:56.255220   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:56.290529   61935 cri.go:89] found id: ""
	I0915 08:08:56.290555   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.290563   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:56.290568   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:56.290617   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:56.326050   61935 cri.go:89] found id: ""
	I0915 08:08:56.326078   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.326090   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:56.326097   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:56.326152   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:56.367108   61935 cri.go:89] found id: ""
	I0915 08:08:56.367133   61935 logs.go:276] 0 containers: []
	W0915 08:08:56.367142   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:56.367154   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:56.367165   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:56.451390   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:56.451429   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:08:56.489253   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:56.489277   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:56.542779   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:56.542810   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:56.556318   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:56.556348   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:56.627578   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:56.011082   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:58.510808   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:00.514962   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:58.720481   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:01.219660   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:08:59.128488   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:08:59.143451   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:08:59.143513   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:08:59.186468   61935 cri.go:89] found id: ""
	I0915 08:08:59.186492   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.186500   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:08:59.186506   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:08:59.186567   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:08:59.225545   61935 cri.go:89] found id: ""
	I0915 08:08:59.225571   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.225582   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:08:59.225591   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:08:59.225640   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:08:59.260234   61935 cri.go:89] found id: ""
	I0915 08:08:59.260268   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.260279   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:08:59.260286   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:08:59.260349   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:08:59.300925   61935 cri.go:89] found id: ""
	I0915 08:08:59.300955   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.300965   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:08:59.300973   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:08:59.301034   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:08:59.336180   61935 cri.go:89] found id: ""
	I0915 08:08:59.336205   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.336215   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:08:59.336222   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:08:59.336283   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:08:59.373565   61935 cri.go:89] found id: ""
	I0915 08:08:59.373594   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.373602   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:08:59.373607   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:08:59.373657   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:08:59.408361   61935 cri.go:89] found id: ""
	I0915 08:08:59.408391   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.408400   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:08:59.408408   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:08:59.408462   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:08:59.443425   61935 cri.go:89] found id: ""
	I0915 08:08:59.443454   61935 logs.go:276] 0 containers: []
	W0915 08:08:59.443465   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:08:59.443475   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:08:59.443489   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:08:59.497342   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:08:59.497379   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:08:59.511662   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:08:59.511689   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:08:59.582080   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:08:59.582104   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:08:59.582116   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:08:59.671260   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:08:59.671303   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:02.216507   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:02.230461   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:02.230519   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:02.268150   61935 cri.go:89] found id: ""
	I0915 08:09:02.268183   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.268192   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:02.268199   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:02.268255   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:02.302417   61935 cri.go:89] found id: ""
	I0915 08:09:02.302452   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.302464   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:02.302471   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:02.302532   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:02.337595   61935 cri.go:89] found id: ""
	I0915 08:09:02.337627   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.337637   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:02.337643   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:02.337691   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:02.378259   61935 cri.go:89] found id: ""
	I0915 08:09:02.378288   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.378296   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:02.378302   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:02.378349   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:02.413584   61935 cri.go:89] found id: ""
	I0915 08:09:02.413616   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.413627   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:02.413634   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:02.413700   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:02.451407   61935 cri.go:89] found id: ""
	I0915 08:09:02.451447   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.451460   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:02.451467   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:02.451546   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:02.486982   61935 cri.go:89] found id: ""
	I0915 08:09:02.487008   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.487015   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:02.487021   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:02.487068   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:02.523234   61935 cri.go:89] found id: ""
	I0915 08:09:02.523255   61935 logs.go:276] 0 containers: []
	W0915 08:09:02.523263   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:02.523270   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:02.523281   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:02.574620   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:02.574654   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:02.588319   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:02.588362   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:02.661717   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:02.661743   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:02.661756   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:02.743103   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:02.743155   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:03.010188   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:05.011294   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:03.717859   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:05.718937   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:05.281439   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:05.294548   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:05.294610   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:05.328250   61935 cri.go:89] found id: ""
	I0915 08:09:05.328274   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.328285   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:05.328292   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:05.328346   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:05.364061   61935 cri.go:89] found id: ""
	I0915 08:09:05.364086   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.364097   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:05.364104   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:05.364168   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:05.399031   61935 cri.go:89] found id: ""
	I0915 08:09:05.399061   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.399070   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:05.399076   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:05.399122   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:05.432949   61935 cri.go:89] found id: ""
	I0915 08:09:05.432982   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.432994   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:05.433001   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:05.433074   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:05.470036   61935 cri.go:89] found id: ""
	I0915 08:09:05.470061   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.470069   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:05.470075   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:05.470121   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:05.505317   61935 cri.go:89] found id: ""
	I0915 08:09:05.505342   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.505350   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:05.505356   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:05.505410   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:05.538595   61935 cri.go:89] found id: ""
	I0915 08:09:05.538622   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.538633   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:05.538640   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:05.538701   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:05.573478   61935 cri.go:89] found id: ""
	I0915 08:09:05.573504   61935 logs.go:276] 0 containers: []
	W0915 08:09:05.573512   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:05.573522   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:05.573537   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:05.611465   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:05.611493   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:05.672777   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:05.672812   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:05.685984   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:05.686011   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:05.759301   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:05.759323   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:05.759337   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:07.020932   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:09.510899   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:09.224287   60028 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.001736938s
	I0915 08:09:09.224329   60028 kubeadm.go:310] 
	I0915 08:09:09.224387   60028 kubeadm.go:310] Unfortunately, an error has occurred:
	I0915 08:09:09.224437   60028 kubeadm.go:310] 	context deadline exceeded
	I0915 08:09:09.224457   60028 kubeadm.go:310] 
	I0915 08:09:09.224514   60028 kubeadm.go:310] This error is likely caused by:
	I0915 08:09:09.224548   60028 kubeadm.go:310] 	- The kubelet is not running
	I0915 08:09:09.224692   60028 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0915 08:09:09.224706   60028 kubeadm.go:310] 
	I0915 08:09:09.224854   60028 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0915 08:09:09.224914   60028 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0915 08:09:09.224954   60028 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0915 08:09:09.224961   60028 kubeadm.go:310] 
	I0915 08:09:09.225067   60028 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0915 08:09:09.225193   60028 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0915 08:09:09.225323   60028 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0915 08:09:09.225477   60028 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0915 08:09:09.225594   60028 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0915 08:09:09.225709   60028 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0915 08:09:09.226978   60028 kubeadm.go:310] W0915 08:05:07.685777   10376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 08:09:09.227242   60028 kubeadm.go:310] W0915 08:05:07.686594   10376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 08:09:09.227362   60028 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 08:09:09.227463   60028 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0915 08:09:09.227523   60028 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0915 08:09:09.227699   60028 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.776061ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001736938s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0915 08:05:07.685777   10376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0915 08:05:07.686594   10376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0915 08:09:09.227768   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0915 08:09:10.163291   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 08:09:10.177399   60028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 08:09:10.186948   60028 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 08:09:10.186967   60028 kubeadm.go:157] found existing configuration files:
	
	I0915 08:09:10.187008   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 08:09:10.195834   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 08:09:10.195887   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 08:09:10.205051   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 08:09:10.213683   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 08:09:10.213735   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 08:09:10.223252   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 08:09:10.232009   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 08:09:10.232077   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 08:09:10.241908   60028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 08:09:10.251638   60028 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 08:09:10.251685   60028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 08:09:10.260323   60028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 08:09:10.301778   60028 kubeadm.go:310] W0915 08:09:10.272930   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 08:09:10.302491   60028 kubeadm.go:310] W0915 08:09:10.273684   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 08:09:10.418248   60028 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 08:09:08.218291   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:10.719924   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:08.339410   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:08.353008   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:08.353083   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:08.387561   61935 cri.go:89] found id: ""
	I0915 08:09:08.387587   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.387604   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:08.387611   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:08.387673   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:08.428384   61935 cri.go:89] found id: ""
	I0915 08:09:08.428413   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.428436   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:08.428443   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:08.428504   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:08.468597   61935 cri.go:89] found id: ""
	I0915 08:09:08.468619   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.468628   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:08.468634   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:08.468688   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:08.505118   61935 cri.go:89] found id: ""
	I0915 08:09:08.505146   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.505154   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:08.505159   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:08.505214   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:08.543027   61935 cri.go:89] found id: ""
	I0915 08:09:08.543054   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.543062   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:08.543067   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:08.543114   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:08.578223   61935 cri.go:89] found id: ""
	I0915 08:09:08.578248   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.578257   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:08.578262   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:08.578308   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:08.613588   61935 cri.go:89] found id: ""
	I0915 08:09:08.613610   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.613618   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:08.613624   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:08.613668   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:08.650093   61935 cri.go:89] found id: ""
	I0915 08:09:08.650123   61935 logs.go:276] 0 containers: []
	W0915 08:09:08.650136   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:08.650153   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:08.650168   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:08.704239   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:08.704276   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:08.719790   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:08.719820   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:08.791062   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:08.791088   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:08.791106   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:08.868576   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:08.868615   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:11.406767   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:11.420793   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:11.420874   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:11.457324   61935 cri.go:89] found id: ""
	I0915 08:09:11.457353   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.457365   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:11.457372   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:11.457433   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:11.498403   61935 cri.go:89] found id: ""
	I0915 08:09:11.498443   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.498456   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:11.498464   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:11.498532   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:11.543804   61935 cri.go:89] found id: ""
	I0915 08:09:11.543833   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.543844   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:11.543851   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:11.543910   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:11.582217   61935 cri.go:89] found id: ""
	I0915 08:09:11.582242   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.582252   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:11.582259   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:11.582320   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:11.617510   61935 cri.go:89] found id: ""
	I0915 08:09:11.617537   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.617549   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:11.617557   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:11.617616   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:11.652526   61935 cri.go:89] found id: ""
	I0915 08:09:11.652552   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.652564   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:11.652572   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:11.652629   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:11.687664   61935 cri.go:89] found id: ""
	I0915 08:09:11.687689   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.687697   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:11.687702   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:11.687758   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:11.729188   61935 cri.go:89] found id: ""
	I0915 08:09:11.729210   61935 logs.go:276] 0 containers: []
	W0915 08:09:11.729217   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:11.729226   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:11.729238   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:11.782774   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:11.782805   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:11.796812   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:11.796841   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:11.870059   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:11.870090   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:11.870102   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:11.961914   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:11.961950   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:12.011107   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:14.510417   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:13.218637   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:15.718022   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:17.719109   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:14.507529   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:14.520591   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:14.520656   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:14.558532   61935 cri.go:89] found id: ""
	I0915 08:09:14.558559   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.558568   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:14.558574   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:14.558633   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:14.597535   61935 cri.go:89] found id: ""
	I0915 08:09:14.597558   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.597567   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:14.597572   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:14.597630   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:14.632030   61935 cri.go:89] found id: ""
	I0915 08:09:14.632061   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.632073   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:14.632080   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:14.632132   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:14.666056   61935 cri.go:89] found id: ""
	I0915 08:09:14.666081   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.666091   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:14.666099   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:14.666160   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:14.700921   61935 cri.go:89] found id: ""
	I0915 08:09:14.700948   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.700959   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:14.700966   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:14.701025   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:14.739438   61935 cri.go:89] found id: ""
	I0915 08:09:14.739463   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.739474   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:14.739482   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:14.739537   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:14.774499   61935 cri.go:89] found id: ""
	I0915 08:09:14.774527   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.774540   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:14.774547   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:14.774607   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:14.809303   61935 cri.go:89] found id: ""
	I0915 08:09:14.809326   61935 logs.go:276] 0 containers: []
	W0915 08:09:14.809334   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:14.809342   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:14.809352   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:14.863036   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:14.863066   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:14.876263   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:14.876293   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:14.963151   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:14.963178   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:14.963194   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:15.048069   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:15.048106   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:17.589612   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:17.602745   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:17.602803   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:17.639930   61935 cri.go:89] found id: ""
	I0915 08:09:17.639959   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.639970   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:17.639978   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:17.640039   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:17.674793   61935 cri.go:89] found id: ""
	I0915 08:09:17.674822   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.674833   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:17.674840   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:17.674904   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:17.709424   61935 cri.go:89] found id: ""
	I0915 08:09:17.709457   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.709469   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:17.709476   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:17.709532   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:17.748709   61935 cri.go:89] found id: ""
	I0915 08:09:17.748792   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.748811   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:17.748819   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:17.748882   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:17.784734   61935 cri.go:89] found id: ""
	I0915 08:09:17.784755   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.784763   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:17.784768   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:17.784815   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:17.820817   61935 cri.go:89] found id: ""
	I0915 08:09:17.820845   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.820856   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:17.820863   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:17.820923   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:17.854424   61935 cri.go:89] found id: ""
	I0915 08:09:17.854448   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.854458   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:17.854464   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:17.854513   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:17.889163   61935 cri.go:89] found id: ""
	I0915 08:09:17.889194   61935 logs.go:276] 0 containers: []
	W0915 08:09:17.889205   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:17.889216   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:17.889228   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:16.510873   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:19.010246   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:20.218976   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:22.719087   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:17.940739   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:17.940776   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:17.955023   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:17.955050   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:18.028971   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:18.028999   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:18.029012   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:18.112776   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:18.112818   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:20.661290   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:20.674162   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:20.674239   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:20.708464   61935 cri.go:89] found id: ""
	I0915 08:09:20.708495   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.708507   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:20.708517   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:20.708573   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:20.747672   61935 cri.go:89] found id: ""
	I0915 08:09:20.747701   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.747711   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:20.747719   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:20.747781   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:20.783971   61935 cri.go:89] found id: ""
	I0915 08:09:20.784000   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.784011   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:20.784018   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:20.784086   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:20.821176   61935 cri.go:89] found id: ""
	I0915 08:09:20.821203   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.821214   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:20.821221   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:20.821281   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:20.856463   61935 cri.go:89] found id: ""
	I0915 08:09:20.856489   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.856501   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:20.856509   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:20.856565   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:20.891274   61935 cri.go:89] found id: ""
	I0915 08:09:20.891300   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.891312   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:20.891321   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:20.891385   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:20.925319   61935 cri.go:89] found id: ""
	I0915 08:09:20.925347   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.925359   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:20.925366   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:20.925431   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:20.959080   61935 cri.go:89] found id: ""
	I0915 08:09:20.959110   61935 logs.go:276] 0 containers: []
	W0915 08:09:20.959122   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:20.959132   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:20.959145   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:21.011097   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:21.011131   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:21.025981   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:21.026006   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:21.094256   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:21.094287   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:21.094302   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:21.172805   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:21.172839   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:21.013204   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:23.509795   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:25.512330   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:24.722264   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:27.218200   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:23.713240   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:23.727577   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:23.727638   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:23.764985   61935 cri.go:89] found id: ""
	I0915 08:09:23.765016   61935 logs.go:276] 0 containers: []
	W0915 08:09:23.765028   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:23.765036   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:23.765099   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:23.802121   61935 cri.go:89] found id: ""
	I0915 08:09:23.802145   61935 logs.go:276] 0 containers: []
	W0915 08:09:23.802155   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:23.802163   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:23.802222   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:23.839158   61935 cri.go:89] found id: ""
	I0915 08:09:23.839186   61935 logs.go:276] 0 containers: []
	W0915 08:09:23.839196   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:23.839202   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:23.839259   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:23.873711   61935 cri.go:89] found id: ""
	I0915 08:09:23.873738   61935 logs.go:276] 0 containers: []
	W0915 08:09:23.873746   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:23.873751   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:23.873798   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:23.908508   61935 cri.go:89] found id: ""
	I0915 08:09:23.908530   61935 logs.go:276] 0 containers: []
	W0915 08:09:23.908537   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:23.908543   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:23.908589   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:23.943612   61935 cri.go:89] found id: ""
	I0915 08:09:23.943637   61935 logs.go:276] 0 containers: []
	W0915 08:09:23.943648   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:23.943655   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:23.943714   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:23.984097   61935 cri.go:89] found id: ""
	I0915 08:09:23.984122   61935 logs.go:276] 0 containers: []
	W0915 08:09:23.984130   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:23.984139   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:23.984198   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:24.028181   61935 cri.go:89] found id: ""
	I0915 08:09:24.028208   61935 logs.go:276] 0 containers: []
	W0915 08:09:24.028218   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:24.028226   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:24.028238   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:24.043054   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:24.043083   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:24.121726   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:24.121751   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:24.121765   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:24.202250   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:24.202288   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:24.245101   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:24.245134   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:26.812825   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:26.827813   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:26.827885   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:26.866803   61935 cri.go:89] found id: ""
	I0915 08:09:26.866833   61935 logs.go:276] 0 containers: []
	W0915 08:09:26.866842   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:26.866847   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:26.866895   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:26.904997   61935 cri.go:89] found id: ""
	I0915 08:09:26.905021   61935 logs.go:276] 0 containers: []
	W0915 08:09:26.905029   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:26.905034   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:26.905081   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:26.943595   61935 cri.go:89] found id: ""
	I0915 08:09:26.943619   61935 logs.go:276] 0 containers: []
	W0915 08:09:26.943633   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:26.943638   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:26.943687   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:26.982666   61935 cri.go:89] found id: ""
	I0915 08:09:26.982694   61935 logs.go:276] 0 containers: []
	W0915 08:09:26.982701   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:26.982709   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:26.982768   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:27.022732   61935 cri.go:89] found id: ""
	I0915 08:09:27.022755   61935 logs.go:276] 0 containers: []
	W0915 08:09:27.022763   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:27.022768   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:27.022847   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:27.067523   61935 cri.go:89] found id: ""
	I0915 08:09:27.067551   61935 logs.go:276] 0 containers: []
	W0915 08:09:27.067563   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:27.067570   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:27.067629   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:27.114401   61935 cri.go:89] found id: ""
	I0915 08:09:27.114434   61935 logs.go:276] 0 containers: []
	W0915 08:09:27.114446   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:27.114455   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:27.114517   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:27.162477   61935 cri.go:89] found id: ""
	I0915 08:09:27.162506   61935 logs.go:276] 0 containers: []
	W0915 08:09:27.162517   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:27.162535   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:27.162551   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:27.224535   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:27.224561   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:27.238381   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:27.238415   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:27.310883   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:27.310908   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:27.310924   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:27.387151   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:27.387187   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:28.010800   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:30.011208   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:29.218572   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:31.718399   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:29.925716   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:29.939174   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:29.939247   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:29.977221   61935 cri.go:89] found id: ""
	I0915 08:09:29.977249   61935 logs.go:276] 0 containers: []
	W0915 08:09:29.977260   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:29.977267   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:29.977328   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:30.017498   61935 cri.go:89] found id: ""
	I0915 08:09:30.017520   61935 logs.go:276] 0 containers: []
	W0915 08:09:30.017528   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:30.017533   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:30.017577   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:30.052768   61935 cri.go:89] found id: ""
	I0915 08:09:30.052796   61935 logs.go:276] 0 containers: []
	W0915 08:09:30.052809   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:30.052816   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:30.052876   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:30.087274   61935 cri.go:89] found id: ""
	I0915 08:09:30.087297   61935 logs.go:276] 0 containers: []
	W0915 08:09:30.087305   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:30.087311   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:30.087357   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:30.121209   61935 cri.go:89] found id: ""
	I0915 08:09:30.121238   61935 logs.go:276] 0 containers: []
	W0915 08:09:30.121249   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:30.121256   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:30.121311   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:30.163418   61935 cri.go:89] found id: ""
	I0915 08:09:30.163450   61935 logs.go:276] 0 containers: []
	W0915 08:09:30.163461   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:30.163469   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:30.163535   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:30.202090   61935 cri.go:89] found id: ""
	I0915 08:09:30.202116   61935 logs.go:276] 0 containers: []
	W0915 08:09:30.202127   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:30.202134   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:30.202201   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:30.237458   61935 cri.go:89] found id: ""
	I0915 08:09:30.237486   61935 logs.go:276] 0 containers: []
	W0915 08:09:30.237496   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:30.237506   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:30.237522   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:30.251028   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:30.251054   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:30.315293   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:30.315314   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:30.315326   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:30.392550   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:30.392584   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:30.431801   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:30.431827   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:32.011439   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:34.510832   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:33.720171   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:36.222532   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:32.986124   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:33.003400   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:33.003466   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:33.047351   61935 cri.go:89] found id: ""
	I0915 08:09:33.047380   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.047393   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:33.047403   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:33.047462   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:33.090491   61935 cri.go:89] found id: ""
	I0915 08:09:33.090518   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.090528   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:33.090537   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:33.090597   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:33.150080   61935 cri.go:89] found id: ""
	I0915 08:09:33.150109   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.150123   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:33.150130   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:33.150199   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:33.185253   61935 cri.go:89] found id: ""
	I0915 08:09:33.185280   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.185290   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:33.185296   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:33.185348   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:33.221270   61935 cri.go:89] found id: ""
	I0915 08:09:33.221292   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.221299   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:33.221305   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:33.221351   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:33.261550   61935 cri.go:89] found id: ""
	I0915 08:09:33.261574   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.261582   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:33.261588   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:33.261637   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:33.301473   61935 cri.go:89] found id: ""
	I0915 08:09:33.301504   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.301516   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:33.301523   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:33.301580   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:33.345197   61935 cri.go:89] found id: ""
	I0915 08:09:33.345219   61935 logs.go:276] 0 containers: []
	W0915 08:09:33.345227   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:33.345235   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:33.345245   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:33.391720   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:33.391755   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:33.444529   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:33.444562   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:33.458221   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:33.458249   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:33.525768   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:33.525789   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:33.525801   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:36.108157   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:36.122538   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:36.122643   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:36.155489   61935 cri.go:89] found id: ""
	I0915 08:09:36.155515   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.155523   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:36.155528   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:36.155576   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:36.195511   61935 cri.go:89] found id: ""
	I0915 08:09:36.195537   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.195547   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:36.195553   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:36.195616   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:36.240115   61935 cri.go:89] found id: ""
	I0915 08:09:36.240137   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.240145   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:36.240150   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:36.240200   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:36.280048   61935 cri.go:89] found id: ""
	I0915 08:09:36.280074   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.280084   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:36.280092   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:36.280156   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:36.318815   61935 cri.go:89] found id: ""
	I0915 08:09:36.318841   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.318852   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:36.318859   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:36.318920   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:36.359490   61935 cri.go:89] found id: ""
	I0915 08:09:36.359512   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.359520   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:36.359526   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:36.359578   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:36.399029   61935 cri.go:89] found id: ""
	I0915 08:09:36.399055   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.399063   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:36.399069   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:36.399117   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:36.434038   61935 cri.go:89] found id: ""
	I0915 08:09:36.434069   61935 logs.go:276] 0 containers: []
	W0915 08:09:36.434080   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:36.434091   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:36.434103   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:36.487958   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:36.487992   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:36.501132   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:36.501159   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:36.574447   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:36.574471   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:36.574487   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:36.649466   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:36.649497   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:37.010487   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:39.011761   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:38.719831   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:41.218662   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:39.184950   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:39.198631   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:39.198699   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:39.234733   61935 cri.go:89] found id: ""
	I0915 08:09:39.234755   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.234764   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:39.234770   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:39.234817   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:39.270437   61935 cri.go:89] found id: ""
	I0915 08:09:39.270458   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.270467   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:39.270472   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:39.270520   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:39.306541   61935 cri.go:89] found id: ""
	I0915 08:09:39.306571   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.306582   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:39.306589   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:39.306647   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:39.340312   61935 cri.go:89] found id: ""
	I0915 08:09:39.340336   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.340344   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:39.340350   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:39.340409   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:39.379330   61935 cri.go:89] found id: ""
	I0915 08:09:39.379358   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.379368   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:39.379374   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:39.379427   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:39.416122   61935 cri.go:89] found id: ""
	I0915 08:09:39.416148   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.416156   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:39.416169   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:39.416224   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:39.450438   61935 cri.go:89] found id: ""
	I0915 08:09:39.450466   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.450475   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:39.450480   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:39.450540   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:39.485541   61935 cri.go:89] found id: ""
	I0915 08:09:39.485569   61935 logs.go:276] 0 containers: []
	W0915 08:09:39.485578   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:39.485587   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:39.485603   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:39.499403   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:39.499429   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:39.572730   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:39.572758   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:39.572773   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:39.656651   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:39.656683   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:39.696872   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:39.696896   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:42.253904   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:42.269134   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:42.269206   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:42.313601   61935 cri.go:89] found id: ""
	I0915 08:09:42.313624   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.313631   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:42.313637   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:42.313691   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:42.356529   61935 cri.go:89] found id: ""
	I0915 08:09:42.356552   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.356560   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:42.356565   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:42.356620   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:42.391084   61935 cri.go:89] found id: ""
	I0915 08:09:42.391105   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.391114   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:42.391120   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:42.391179   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:42.426383   61935 cri.go:89] found id: ""
	I0915 08:09:42.426409   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.426428   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:42.426435   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:42.426490   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:42.461537   61935 cri.go:89] found id: ""
	I0915 08:09:42.461565   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.461574   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:42.461579   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:42.461638   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:42.495552   61935 cri.go:89] found id: ""
	I0915 08:09:42.495581   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.495592   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:42.495600   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:42.495665   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:42.530929   61935 cri.go:89] found id: ""
	I0915 08:09:42.530951   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.530960   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:42.530966   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:42.531012   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:42.572552   61935 cri.go:89] found id: ""
	I0915 08:09:42.572582   61935 logs.go:276] 0 containers: []
	W0915 08:09:42.572593   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:42.572605   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:42.572619   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:42.628375   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:42.628411   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:42.643389   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:42.643419   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:42.720959   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:42.720985   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:42.721000   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:42.798422   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:42.798456   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:41.511715   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:44.011152   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:43.218967   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:45.723199   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:45.335617   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:45.349682   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:45.349758   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:45.383997   61935 cri.go:89] found id: ""
	I0915 08:09:45.384021   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.384029   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:45.384034   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:45.384084   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:45.418858   61935 cri.go:89] found id: ""
	I0915 08:09:45.418889   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.418899   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:45.418905   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:45.418966   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:45.453758   61935 cri.go:89] found id: ""
	I0915 08:09:45.453781   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.453790   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:45.453796   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:45.453863   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:45.488623   61935 cri.go:89] found id: ""
	I0915 08:09:45.488650   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.488660   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:45.488667   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:45.488729   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:45.523418   61935 cri.go:89] found id: ""
	I0915 08:09:45.523440   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.523450   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:45.523458   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:45.523510   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:45.556425   61935 cri.go:89] found id: ""
	I0915 08:09:45.556458   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.556469   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:45.556477   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:45.556530   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:45.595148   61935 cri.go:89] found id: ""
	I0915 08:09:45.595186   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.595196   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:45.595202   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:45.595263   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:45.630716   61935 cri.go:89] found id: ""
	I0915 08:09:45.630740   61935 logs.go:276] 0 containers: []
	W0915 08:09:45.630748   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:45.630756   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:45.630767   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:45.704755   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:45.704781   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:45.704797   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:45.785466   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:45.785503   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:45.824489   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:45.824512   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:45.877290   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:45.877328   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:46.510397   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:48.511081   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:48.218567   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:50.718757   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:52.719752   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:48.391656   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:48.404939   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:48.404995   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:48.439080   61935 cri.go:89] found id: ""
	I0915 08:09:48.439106   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.439117   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:48.439124   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:48.439172   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:48.471829   61935 cri.go:89] found id: ""
	I0915 08:09:48.471857   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.471869   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:48.471876   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:48.471933   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:48.505515   61935 cri.go:89] found id: ""
	I0915 08:09:48.505540   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.505550   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:48.505557   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:48.505619   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:48.541149   61935 cri.go:89] found id: ""
	I0915 08:09:48.541177   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.541189   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:48.541197   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:48.541258   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:48.573267   61935 cri.go:89] found id: ""
	I0915 08:09:48.573297   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.573310   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:48.573317   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:48.573382   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:48.608825   61935 cri.go:89] found id: ""
	I0915 08:09:48.608852   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.608863   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:48.608872   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:48.608929   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:48.642622   61935 cri.go:89] found id: ""
	I0915 08:09:48.642650   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.642661   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:48.642669   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:48.642730   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:48.678787   61935 cri.go:89] found id: ""
	I0915 08:09:48.678812   61935 logs.go:276] 0 containers: []
	W0915 08:09:48.678821   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:48.678832   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:48.678846   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:48.733822   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:48.733859   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:48.749084   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:48.749111   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:48.826734   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:48.826758   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:48.826772   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:48.907424   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:48.907459   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:51.451265   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:51.465785   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:51.465875   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:51.505241   61935 cri.go:89] found id: ""
	I0915 08:09:51.505268   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.505279   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:51.505292   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:51.505356   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:51.542121   61935 cri.go:89] found id: ""
	I0915 08:09:51.542154   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.542165   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:51.542179   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:51.542243   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:51.577446   61935 cri.go:89] found id: ""
	I0915 08:09:51.577473   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.577482   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:51.577487   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:51.577534   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:51.616818   61935 cri.go:89] found id: ""
	I0915 08:09:51.616845   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.616856   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:51.616863   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:51.616923   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:51.652408   61935 cri.go:89] found id: ""
	I0915 08:09:51.652435   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.652444   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:51.652449   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:51.652499   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:51.692330   61935 cri.go:89] found id: ""
	I0915 08:09:51.692361   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.692372   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:51.692380   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:51.692446   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:51.730592   61935 cri.go:89] found id: ""
	I0915 08:09:51.730617   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.730625   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:51.730631   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:51.730715   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:51.771088   61935 cri.go:89] found id: ""
	I0915 08:09:51.771112   61935 logs.go:276] 0 containers: []
	W0915 08:09:51.771120   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:51.771129   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:51.771139   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:51.822791   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:51.822819   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:51.835625   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:51.835653   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:51.906676   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:51.906694   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:51.906707   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:51.989112   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:51.989162   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:51.011235   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:53.511158   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:55.218960   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:57.219615   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:54.532713   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:54.546555   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:54.546625   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:54.580610   61935 cri.go:89] found id: ""
	I0915 08:09:54.580640   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.580652   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:54.580660   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:54.580718   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:54.616041   61935 cri.go:89] found id: ""
	I0915 08:09:54.616066   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.616076   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:54.616083   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:54.616144   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:54.651881   61935 cri.go:89] found id: ""
	I0915 08:09:54.651907   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.651916   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:54.651922   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:54.651979   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:54.687113   61935 cri.go:89] found id: ""
	I0915 08:09:54.687138   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.687150   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:54.687158   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:54.687224   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:54.723234   61935 cri.go:89] found id: ""
	I0915 08:09:54.723255   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.723263   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:54.723267   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:54.723312   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:54.763588   61935 cri.go:89] found id: ""
	I0915 08:09:54.763613   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.763622   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:54.763627   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:54.763673   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:54.802796   61935 cri.go:89] found id: ""
	I0915 08:09:54.802817   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.802824   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:54.802829   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:54.802882   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:54.841709   61935 cri.go:89] found id: ""
	I0915 08:09:54.841737   61935 logs.go:276] 0 containers: []
	W0915 08:09:54.841745   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:54.841754   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:54.841767   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:54.854743   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:54.854774   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:54.923369   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:09:54.923393   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:54.923408   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:55.010746   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:55.010786   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:55.051218   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:55.051249   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:57.602818   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:09:57.615723   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:09:57.615789   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:09:57.651644   61935 cri.go:89] found id: ""
	I0915 08:09:57.651673   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.651684   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:09:57.651691   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:09:57.651748   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:09:57.684648   61935 cri.go:89] found id: ""
	I0915 08:09:57.684672   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.684680   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:09:57.684685   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:09:57.684742   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:09:57.720411   61935 cri.go:89] found id: ""
	I0915 08:09:57.720452   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.720463   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:09:57.720471   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:09:57.720529   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:09:57.759580   61935 cri.go:89] found id: ""
	I0915 08:09:57.759613   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.759627   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:09:57.759634   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:09:57.759690   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:09:57.795086   61935 cri.go:89] found id: ""
	I0915 08:09:57.795116   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.795128   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:09:57.795136   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:09:57.795204   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:09:57.828945   61935 cri.go:89] found id: ""
	I0915 08:09:57.828969   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.828977   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:09:57.828983   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:09:57.829029   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:09:57.861999   61935 cri.go:89] found id: ""
	I0915 08:09:57.862027   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.862038   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:09:57.862046   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:09:57.862104   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:09:57.896919   61935 cri.go:89] found id: ""
	I0915 08:09:57.896945   61935 logs.go:276] 0 containers: []
	W0915 08:09:57.896956   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:09:57.896967   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:09:57.896984   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:09:56.010603   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:58.011098   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:00.012297   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:59.719360   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:01.719827   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:09:57.980349   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:09:57.980387   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:09:58.021986   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:09:58.022010   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:09:58.072795   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:09:58.072828   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:09:58.086080   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:09:58.086107   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:09:58.156776   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:00.657499   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:00.671649   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:00.671721   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:00.708542   61935 cri.go:89] found id: ""
	I0915 08:10:00.708571   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.708583   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:00.708589   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:00.708658   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:00.747842   61935 cri.go:89] found id: ""
	I0915 08:10:00.747869   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.747877   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:00.747883   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:00.747931   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:00.785455   61935 cri.go:89] found id: ""
	I0915 08:10:00.785480   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.785489   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:00.785494   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:00.785543   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:00.821276   61935 cri.go:89] found id: ""
	I0915 08:10:00.821300   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.821308   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:00.821315   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:00.821361   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:00.858059   61935 cri.go:89] found id: ""
	I0915 08:10:00.858091   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.858104   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:00.858113   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:00.858213   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:00.894322   61935 cri.go:89] found id: ""
	I0915 08:10:00.894344   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.894352   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:00.894357   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:00.894413   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:00.929715   61935 cri.go:89] found id: ""
	I0915 08:10:00.929746   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.929756   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:00.929763   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:00.929834   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:00.968440   61935 cri.go:89] found id: ""
	I0915 08:10:00.968470   61935 logs.go:276] 0 containers: []
	W0915 08:10:00.968481   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:00.968492   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:00.968506   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:01.052002   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:01.052039   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:01.093997   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:01.094024   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:01.147579   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:01.147615   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:01.166569   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:01.166598   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:01.238963   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:02.509979   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:04.510807   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:03.721668   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:06.219345   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:03.739926   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:03.754083   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:03.754162   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:03.799511   61935 cri.go:89] found id: ""
	I0915 08:10:03.799533   61935 logs.go:276] 0 containers: []
	W0915 08:10:03.799541   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:03.799547   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:03.799601   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:03.834483   61935 cri.go:89] found id: ""
	I0915 08:10:03.834504   61935 logs.go:276] 0 containers: []
	W0915 08:10:03.834512   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:03.834517   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:03.834566   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:03.868547   61935 cri.go:89] found id: ""
	I0915 08:10:03.868580   61935 logs.go:276] 0 containers: []
	W0915 08:10:03.868588   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:03.868594   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:03.868647   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:03.901942   61935 cri.go:89] found id: ""
	I0915 08:10:03.901974   61935 logs.go:276] 0 containers: []
	W0915 08:10:03.901985   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:03.901993   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:03.902054   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:03.936696   61935 cri.go:89] found id: ""
	I0915 08:10:03.936726   61935 logs.go:276] 0 containers: []
	W0915 08:10:03.936736   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:03.936744   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:03.936798   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:03.970816   61935 cri.go:89] found id: ""
	I0915 08:10:03.970842   61935 logs.go:276] 0 containers: []
	W0915 08:10:03.970850   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:03.970856   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:03.970903   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:04.008769   61935 cri.go:89] found id: ""
	I0915 08:10:04.008793   61935 logs.go:276] 0 containers: []
	W0915 08:10:04.008803   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:04.008809   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:04.008883   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:04.046272   61935 cri.go:89] found id: ""
	I0915 08:10:04.046298   61935 logs.go:276] 0 containers: []
	W0915 08:10:04.046309   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:04.046319   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:04.046332   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:04.097215   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:04.097249   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:04.111551   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:04.111581   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:04.193395   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:04.193420   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:04.193435   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:04.277312   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:04.277348   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:06.820022   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:06.834086   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:06.834144   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:06.870889   61935 cri.go:89] found id: ""
	I0915 08:10:06.870911   61935 logs.go:276] 0 containers: []
	W0915 08:10:06.870919   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:06.870924   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:06.870970   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:06.924090   61935 cri.go:89] found id: ""
	I0915 08:10:06.924115   61935 logs.go:276] 0 containers: []
	W0915 08:10:06.924126   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:06.924133   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:06.924204   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:06.960273   61935 cri.go:89] found id: ""
	I0915 08:10:06.960298   61935 logs.go:276] 0 containers: []
	W0915 08:10:06.960306   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:06.960311   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:06.960359   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:06.996400   61935 cri.go:89] found id: ""
	I0915 08:10:06.996437   61935 logs.go:276] 0 containers: []
	W0915 08:10:06.996447   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:06.996455   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:06.996517   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:07.041717   61935 cri.go:89] found id: ""
	I0915 08:10:07.041763   61935 logs.go:276] 0 containers: []
	W0915 08:10:07.041775   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:07.041783   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:07.041878   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:07.081080   61935 cri.go:89] found id: ""
	I0915 08:10:07.081118   61935 logs.go:276] 0 containers: []
	W0915 08:10:07.081127   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:07.081133   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:07.081181   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:07.121084   61935 cri.go:89] found id: ""
	I0915 08:10:07.121122   61935 logs.go:276] 0 containers: []
	W0915 08:10:07.121133   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:07.121140   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:07.121200   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:07.154210   61935 cri.go:89] found id: ""
	I0915 08:10:07.154236   61935 logs.go:276] 0 containers: []
	W0915 08:10:07.154246   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:07.154255   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:07.154265   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:07.207151   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:07.207187   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:07.222877   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:07.222904   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:07.299310   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:07.299336   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:07.299352   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:07.379026   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:07.379058   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:06.511323   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:09.011050   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:08.719008   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:11.221415   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:09.919187   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:09.932243   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:09.932317   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:09.971295   61935 cri.go:89] found id: ""
	I0915 08:10:09.971317   61935 logs.go:276] 0 containers: []
	W0915 08:10:09.971326   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:09.971331   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:09.971376   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:10.012410   61935 cri.go:89] found id: ""
	I0915 08:10:10.012435   61935 logs.go:276] 0 containers: []
	W0915 08:10:10.012446   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:10.012452   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:10.012511   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:10.051495   61935 cri.go:89] found id: ""
	I0915 08:10:10.051518   61935 logs.go:276] 0 containers: []
	W0915 08:10:10.051526   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:10.051531   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:10.051607   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:10.091935   61935 cri.go:89] found id: ""
	I0915 08:10:10.091958   61935 logs.go:276] 0 containers: []
	W0915 08:10:10.091965   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:10.091971   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:10.092036   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:10.135287   61935 cri.go:89] found id: ""
	I0915 08:10:10.135313   61935 logs.go:276] 0 containers: []
	W0915 08:10:10.135324   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:10.135331   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:10.135394   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:10.170784   61935 cri.go:89] found id: ""
	I0915 08:10:10.170811   61935 logs.go:276] 0 containers: []
	W0915 08:10:10.170819   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:10.170825   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:10.170877   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:10.205683   61935 cri.go:89] found id: ""
	I0915 08:10:10.205707   61935 logs.go:276] 0 containers: []
	W0915 08:10:10.205716   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:10.205721   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:10.205767   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:10.242555   61935 cri.go:89] found id: ""
	I0915 08:10:10.242581   61935 logs.go:276] 0 containers: []
	W0915 08:10:10.242588   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:10.242598   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:10.242608   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:10.257155   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:10.257181   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:10.327171   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:10.327195   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:10.327212   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:10.408905   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:10.408948   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:10.449193   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:10.449217   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:11.511133   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:14.010865   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:13.717919   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:15.718217   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:17.719156   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:13.001739   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:13.017381   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:13.017450   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:13.056303   61935 cri.go:89] found id: ""
	I0915 08:10:13.056332   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.056343   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:13.056349   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:13.056407   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:13.092192   61935 cri.go:89] found id: ""
	I0915 08:10:13.092218   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.092226   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:13.092232   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:13.092279   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:13.134190   61935 cri.go:89] found id: ""
	I0915 08:10:13.134217   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.134225   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:13.134231   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:13.134284   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:13.174227   61935 cri.go:89] found id: ""
	I0915 08:10:13.174252   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.174261   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:13.174266   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:13.174311   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:13.209827   61935 cri.go:89] found id: ""
	I0915 08:10:13.209855   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.209863   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:13.209869   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:13.209919   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:13.246238   61935 cri.go:89] found id: ""
	I0915 08:10:13.246262   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.246270   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:13.246276   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:13.246332   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:13.283111   61935 cri.go:89] found id: ""
	I0915 08:10:13.283136   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.283145   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:13.283153   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:13.283220   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:13.325825   61935 cri.go:89] found id: ""
	I0915 08:10:13.325852   61935 logs.go:276] 0 containers: []
	W0915 08:10:13.325863   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:13.325873   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:13.325890   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:13.372945   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:13.372968   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:13.442895   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:13.442931   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:13.457046   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:13.457074   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:13.527209   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:13.527233   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:13.527248   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:16.104063   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:16.117928   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:16.117990   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:16.156883   61935 cri.go:89] found id: ""
	I0915 08:10:16.156906   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.156913   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:16.156919   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:16.156963   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:16.190558   61935 cri.go:89] found id: ""
	I0915 08:10:16.190587   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.190599   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:16.190609   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:16.190671   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:16.224651   61935 cri.go:89] found id: ""
	I0915 08:10:16.224677   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.224685   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:16.224690   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:16.224746   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:16.259176   61935 cri.go:89] found id: ""
	I0915 08:10:16.259204   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.259215   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:16.259223   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:16.259281   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:16.294089   61935 cri.go:89] found id: ""
	I0915 08:10:16.294127   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.294138   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:16.294145   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:16.294213   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:16.330621   61935 cri.go:89] found id: ""
	I0915 08:10:16.330649   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.330660   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:16.330667   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:16.330727   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:16.364621   61935 cri.go:89] found id: ""
	I0915 08:10:16.364650   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.364662   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:16.364670   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:16.364721   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:16.398793   61935 cri.go:89] found id: ""
	I0915 08:10:16.398820   61935 logs.go:276] 0 containers: []
	W0915 08:10:16.398832   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:16.398841   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:16.398853   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:16.450294   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:16.450332   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:16.464807   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:16.464836   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:16.534585   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:16.534608   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:16.534625   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:16.614314   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:16.614350   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:16.015468   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:18.513436   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:19.719673   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:22.219235   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:19.159956   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:19.180281   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:19.180362   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:19.240198   61935 cri.go:89] found id: ""
	I0915 08:10:19.240225   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.240236   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:19.240244   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:19.240306   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:19.294455   61935 cri.go:89] found id: ""
	I0915 08:10:19.294483   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.294495   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:19.294502   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:19.294565   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:19.340829   61935 cri.go:89] found id: ""
	I0915 08:10:19.340854   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.340865   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:19.340872   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:19.340935   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:19.376111   61935 cri.go:89] found id: ""
	I0915 08:10:19.376134   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.376142   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:19.376148   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:19.376200   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:19.409199   61935 cri.go:89] found id: ""
	I0915 08:10:19.409226   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.409234   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:19.409239   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:19.409290   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:19.444350   61935 cri.go:89] found id: ""
	I0915 08:10:19.444381   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.444392   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:19.444400   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:19.444452   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:19.477383   61935 cri.go:89] found id: ""
	I0915 08:10:19.477408   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.477417   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:19.477422   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:19.477469   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:19.513772   61935 cri.go:89] found id: ""
	I0915 08:10:19.513794   61935 logs.go:276] 0 containers: []
	W0915 08:10:19.513802   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:19.513817   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:19.513828   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:19.565634   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:19.565671   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:19.580640   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:19.580679   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:19.659305   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:19.659332   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:19.659347   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:19.751782   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:19.751822   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:22.294726   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:22.309453   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:22.309514   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:22.345471   61935 cri.go:89] found id: ""
	I0915 08:10:22.345502   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.345513   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:22.345521   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:22.345582   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:22.379284   61935 cri.go:89] found id: ""
	I0915 08:10:22.379309   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.379320   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:22.379327   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:22.379389   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:22.416094   61935 cri.go:89] found id: ""
	I0915 08:10:22.416120   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.416131   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:22.416138   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:22.416198   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:22.452350   61935 cri.go:89] found id: ""
	I0915 08:10:22.452376   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.452385   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:22.452390   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:22.452454   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:22.484259   61935 cri.go:89] found id: ""
	I0915 08:10:22.484284   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.484293   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:22.484298   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:22.484367   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:22.520032   61935 cri.go:89] found id: ""
	I0915 08:10:22.520054   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.520062   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:22.520068   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:22.520115   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:22.557564   61935 cri.go:89] found id: ""
	I0915 08:10:22.557590   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.557599   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:22.557605   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:22.557649   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:22.595468   61935 cri.go:89] found id: ""
	I0915 08:10:22.595494   61935 logs.go:276] 0 containers: []
	W0915 08:10:22.595503   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:22.595511   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:22.595523   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:22.648186   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:22.648221   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:22.662031   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:22.662064   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:22.742843   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:22.742866   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:22.742876   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:22.822283   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:22.822317   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:21.010955   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:23.011127   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:25.512379   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:24.719623   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:27.218654   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:25.364647   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:25.377669   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:25.377753   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:25.410746   61935 cri.go:89] found id: ""
	I0915 08:10:25.410767   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.410775   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:25.410782   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:25.410826   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:25.446618   61935 cri.go:89] found id: ""
	I0915 08:10:25.446642   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.446650   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:25.446655   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:25.446702   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:25.484924   61935 cri.go:89] found id: ""
	I0915 08:10:25.484954   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.484966   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:25.484974   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:25.485036   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:25.526381   61935 cri.go:89] found id: ""
	I0915 08:10:25.526409   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.526420   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:25.526428   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:25.526486   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:25.560638   61935 cri.go:89] found id: ""
	I0915 08:10:25.560660   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.560668   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:25.560674   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:25.560719   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:25.595304   61935 cri.go:89] found id: ""
	I0915 08:10:25.595332   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.595343   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:25.595350   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:25.595408   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:25.629299   61935 cri.go:89] found id: ""
	I0915 08:10:25.629325   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.629334   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:25.629341   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:25.629393   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:25.663043   61935 cri.go:89] found id: ""
	I0915 08:10:25.663068   61935 logs.go:276] 0 containers: []
	W0915 08:10:25.663077   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:25.663085   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:25.663097   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:25.711805   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:25.711838   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:25.727116   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:25.727140   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:25.802336   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:25.802365   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:25.802378   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:25.878747   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:25.878792   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:28.011398   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:30.510405   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:29.219973   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:31.720040   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:28.417908   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:28.433156   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:28.433232   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:28.468084   61935 cri.go:89] found id: ""
	I0915 08:10:28.468109   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.468119   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:28.468127   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:28.468195   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:28.502715   61935 cri.go:89] found id: ""
	I0915 08:10:28.502743   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.502754   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:28.502763   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:28.502829   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:28.538850   61935 cri.go:89] found id: ""
	I0915 08:10:28.538877   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.538887   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:28.538894   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:28.538957   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:28.572362   61935 cri.go:89] found id: ""
	I0915 08:10:28.572384   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.572392   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:28.572397   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:28.572445   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:28.607451   61935 cri.go:89] found id: ""
	I0915 08:10:28.607475   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.607483   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:28.607488   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:28.607539   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:28.648852   61935 cri.go:89] found id: ""
	I0915 08:10:28.648874   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.648882   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:28.648888   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:28.648938   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:28.685126   61935 cri.go:89] found id: ""
	I0915 08:10:28.685160   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.685170   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:28.685178   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:28.685236   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:28.730608   61935 cri.go:89] found id: ""
	I0915 08:10:28.730640   61935 logs.go:276] 0 containers: []
	W0915 08:10:28.730652   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:28.730663   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:28.730677   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:28.808620   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:28.808654   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:28.848059   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:28.848088   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:28.899671   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:28.899704   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:28.913172   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:28.913210   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:28.983119   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:31.483828   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:31.497665   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:31.497742   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:31.532820   61935 cri.go:89] found id: ""
	I0915 08:10:31.532842   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.532851   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:31.532857   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:31.532905   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:31.569085   61935 cri.go:89] found id: ""
	I0915 08:10:31.569107   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.569115   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:31.569121   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:31.569180   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:31.609363   61935 cri.go:89] found id: ""
	I0915 08:10:31.609390   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.609411   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:31.609419   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:31.609485   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:31.644057   61935 cri.go:89] found id: ""
	I0915 08:10:31.644086   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.644097   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:31.644104   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:31.644174   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:31.683664   61935 cri.go:89] found id: ""
	I0915 08:10:31.683696   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.683708   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:31.683715   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:31.683777   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:31.721435   61935 cri.go:89] found id: ""
	I0915 08:10:31.721457   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.721471   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:31.721483   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:31.721546   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:31.757769   61935 cri.go:89] found id: ""
	I0915 08:10:31.757794   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.757803   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:31.757823   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:31.757876   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:31.794725   61935 cri.go:89] found id: ""
	I0915 08:10:31.794754   61935 logs.go:276] 0 containers: []
	W0915 08:10:31.794765   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:31.794776   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:31.794789   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:31.846244   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:31.846279   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:31.860010   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:31.860037   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:31.938865   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:31.938885   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:31.938898   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:32.023522   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:32.023550   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:32.510929   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:34.511454   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:34.218751   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:36.719136   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:34.565519   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:34.579429   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:34.579510   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:34.618741   61935 cri.go:89] found id: ""
	I0915 08:10:34.618767   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.618783   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:34.618791   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:34.618857   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:34.654320   61935 cri.go:89] found id: ""
	I0915 08:10:34.654346   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.654354   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:34.654360   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:34.654407   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:34.690118   61935 cri.go:89] found id: ""
	I0915 08:10:34.690142   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.690159   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:34.690165   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:34.690224   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:34.726231   61935 cri.go:89] found id: ""
	I0915 08:10:34.726260   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.726269   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:34.726274   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:34.726318   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:34.766303   61935 cri.go:89] found id: ""
	I0915 08:10:34.766323   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.766332   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:34.766337   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:34.766382   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:34.801539   61935 cri.go:89] found id: ""
	I0915 08:10:34.801562   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.801574   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:34.801580   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:34.801637   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:34.840605   61935 cri.go:89] found id: ""
	I0915 08:10:34.840631   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.840642   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:34.840649   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:34.840706   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:34.880537   61935 cri.go:89] found id: ""
	I0915 08:10:34.880571   61935 logs.go:276] 0 containers: []
	W0915 08:10:34.880582   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:34.880593   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:34.880609   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:34.933226   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:34.933263   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:34.946556   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:34.946583   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:35.015227   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:35.015259   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:35.015276   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:35.096876   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:35.096909   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:37.638024   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:37.651711   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:37.651772   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:37.683878   61935 cri.go:89] found id: ""
	I0915 08:10:37.683905   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.683915   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:37.683921   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:37.683970   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:37.718969   61935 cri.go:89] found id: ""
	I0915 08:10:37.718998   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.719008   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:37.719015   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:37.719073   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:37.755051   61935 cri.go:89] found id: ""
	I0915 08:10:37.755079   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.755090   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:37.755097   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:37.755164   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:37.791037   61935 cri.go:89] found id: ""
	I0915 08:10:37.791059   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.791067   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:37.791072   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:37.791117   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:37.825153   61935 cri.go:89] found id: ""
	I0915 08:10:37.825182   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.825194   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:37.825202   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:37.825254   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:37.858747   61935 cri.go:89] found id: ""
	I0915 08:10:37.858774   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.858782   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:37.858789   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:37.858835   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:37.895478   61935 cri.go:89] found id: ""
	I0915 08:10:37.895507   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.895517   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:37.895524   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:37.895598   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:37.011666   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:39.511134   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:38.719575   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:41.219510   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:37.931715   61935 cri.go:89] found id: ""
	I0915 08:10:37.931748   61935 logs.go:276] 0 containers: []
	W0915 08:10:37.931759   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:37.931770   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:37.931783   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:37.973224   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:37.973257   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:38.026588   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:38.026616   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:38.040046   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:38.040080   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:38.115670   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:38.115691   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:38.115706   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:40.693371   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:40.707572   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:40.707639   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:40.742439   61935 cri.go:89] found id: ""
	I0915 08:10:40.742460   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.742468   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:40.742474   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:40.742521   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:40.776581   61935 cri.go:89] found id: ""
	I0915 08:10:40.776605   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.776613   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:40.776618   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:40.776674   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:40.812114   61935 cri.go:89] found id: ""
	I0915 08:10:40.812145   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.812157   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:40.812165   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:40.812222   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:40.845797   61935 cri.go:89] found id: ""
	I0915 08:10:40.845842   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.845851   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:40.845857   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:40.845918   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:40.881466   61935 cri.go:89] found id: ""
	I0915 08:10:40.881494   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.881503   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:40.881508   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:40.881558   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:40.919359   61935 cri.go:89] found id: ""
	I0915 08:10:40.919386   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.919395   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:40.919400   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:40.919453   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:40.955022   61935 cri.go:89] found id: ""
	I0915 08:10:40.955053   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.955064   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:40.955072   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:40.955131   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:40.989508   61935 cri.go:89] found id: ""
	I0915 08:10:40.989535   61935 logs.go:276] 0 containers: []
	W0915 08:10:40.989546   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:40.989558   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:40.989570   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:41.044143   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:41.044178   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:41.057665   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:41.057694   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:41.134085   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:41.134105   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:41.134116   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:41.211725   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:41.211755   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:41.511216   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:44.011958   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:43.718390   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:45.719146   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:43.752111   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:43.765413   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:43.765490   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:43.806768   61935 cri.go:89] found id: ""
	I0915 08:10:43.806786   61935 logs.go:276] 0 containers: []
	W0915 08:10:43.806794   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:43.806799   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:43.806855   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:43.840015   61935 cri.go:89] found id: ""
	I0915 08:10:43.840049   61935 logs.go:276] 0 containers: []
	W0915 08:10:43.840066   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:43.840073   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:43.840122   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:43.872917   61935 cri.go:89] found id: ""
	I0915 08:10:43.872939   61935 logs.go:276] 0 containers: []
	W0915 08:10:43.872947   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:43.872953   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:43.873020   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:43.905940   61935 cri.go:89] found id: ""
	I0915 08:10:43.905966   61935 logs.go:276] 0 containers: []
	W0915 08:10:43.905974   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:43.905980   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:43.906028   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:43.940807   61935 cri.go:89] found id: ""
	I0915 08:10:43.940833   61935 logs.go:276] 0 containers: []
	W0915 08:10:43.940841   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:43.940846   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:43.940895   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:43.974449   61935 cri.go:89] found id: ""
	I0915 08:10:43.974475   61935 logs.go:276] 0 containers: []
	W0915 08:10:43.974487   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:43.974498   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:43.974560   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:44.011503   61935 cri.go:89] found id: ""
	I0915 08:10:44.011526   61935 logs.go:276] 0 containers: []
	W0915 08:10:44.011534   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:44.011541   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:44.011595   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:44.054532   61935 cri.go:89] found id: ""
	I0915 08:10:44.054557   61935 logs.go:276] 0 containers: []
	W0915 08:10:44.054568   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:44.054579   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:44.054592   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:44.106783   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:44.106821   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:44.121354   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:44.121385   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:44.190616   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:44.190640   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:44.190658   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:44.267056   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:44.267098   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:46.805144   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:46.818657   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:46.818718   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:46.854629   61935 cri.go:89] found id: ""
	I0915 08:10:46.854658   61935 logs.go:276] 0 containers: []
	W0915 08:10:46.854670   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:46.854677   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:46.854737   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:46.888581   61935 cri.go:89] found id: ""
	I0915 08:10:46.888605   61935 logs.go:276] 0 containers: []
	W0915 08:10:46.888613   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:46.888620   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:46.888679   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:46.925231   61935 cri.go:89] found id: ""
	I0915 08:10:46.925255   61935 logs.go:276] 0 containers: []
	W0915 08:10:46.925264   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:46.925270   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:46.925319   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:46.959469   61935 cri.go:89] found id: ""
	I0915 08:10:46.959492   61935 logs.go:276] 0 containers: []
	W0915 08:10:46.959500   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:46.959507   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:46.959561   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:46.997365   61935 cri.go:89] found id: ""
	I0915 08:10:46.997392   61935 logs.go:276] 0 containers: []
	W0915 08:10:46.997403   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:46.997411   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:46.997470   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:47.033012   61935 cri.go:89] found id: ""
	I0915 08:10:47.033040   61935 logs.go:276] 0 containers: []
	W0915 08:10:47.033053   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:47.033061   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:47.033126   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:47.067991   61935 cri.go:89] found id: ""
	I0915 08:10:47.068016   61935 logs.go:276] 0 containers: []
	W0915 08:10:47.068024   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:47.068032   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:47.068081   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:47.105057   61935 cri.go:89] found id: ""
	I0915 08:10:47.105083   61935 logs.go:276] 0 containers: []
	W0915 08:10:47.105092   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:47.105099   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:47.105114   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:47.119251   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:47.119280   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:47.190118   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:47.190158   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:47.190172   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:47.269220   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:47.269252   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:47.307812   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:47.307845   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:46.510806   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:49.010556   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:48.219113   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:50.219182   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:52.719944   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:49.860200   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:49.874048   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:49.874102   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:49.912641   61935 cri.go:89] found id: ""
	I0915 08:10:49.912669   61935 logs.go:276] 0 containers: []
	W0915 08:10:49.912680   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:49.912688   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:49.912745   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:49.946603   61935 cri.go:89] found id: ""
	I0915 08:10:49.946629   61935 logs.go:276] 0 containers: []
	W0915 08:10:49.946637   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:49.946643   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:49.946691   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:49.986674   61935 cri.go:89] found id: ""
	I0915 08:10:49.986701   61935 logs.go:276] 0 containers: []
	W0915 08:10:49.986709   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:49.986715   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:49.986769   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:50.023077   61935 cri.go:89] found id: ""
	I0915 08:10:50.023098   61935 logs.go:276] 0 containers: []
	W0915 08:10:50.023106   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:50.023111   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:50.023164   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:50.062594   61935 cri.go:89] found id: ""
	I0915 08:10:50.062622   61935 logs.go:276] 0 containers: []
	W0915 08:10:50.062634   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:50.062641   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:50.062701   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:50.097669   61935 cri.go:89] found id: ""
	I0915 08:10:50.097691   61935 logs.go:276] 0 containers: []
	W0915 08:10:50.097699   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:50.097705   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:50.097752   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:50.135448   61935 cri.go:89] found id: ""
	I0915 08:10:50.135472   61935 logs.go:276] 0 containers: []
	W0915 08:10:50.135480   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:50.135486   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:50.135532   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:50.170951   61935 cri.go:89] found id: ""
	I0915 08:10:50.170978   61935 logs.go:276] 0 containers: []
	W0915 08:10:50.170986   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:50.170994   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:50.171004   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:50.212393   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:50.212420   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:50.264651   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:50.264686   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:50.278103   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:50.278131   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:50.348194   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:50.348220   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:50.348237   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:51.010706   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:53.510185   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:55.511417   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:55.218233   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:57.718254   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:52.933466   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:52.947027   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:52.947095   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:52.982097   61935 cri.go:89] found id: ""
	I0915 08:10:52.982125   61935 logs.go:276] 0 containers: []
	W0915 08:10:52.982136   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:52.982144   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:52.982244   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:53.019413   61935 cri.go:89] found id: ""
	I0915 08:10:53.019439   61935 logs.go:276] 0 containers: []
	W0915 08:10:53.019449   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:53.019456   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:53.019516   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:53.053821   61935 cri.go:89] found id: ""
	I0915 08:10:53.053851   61935 logs.go:276] 0 containers: []
	W0915 08:10:53.053861   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:53.053867   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:53.053931   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:53.089598   61935 cri.go:89] found id: ""
	I0915 08:10:53.089631   61935 logs.go:276] 0 containers: []
	W0915 08:10:53.089644   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:53.089650   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:53.089709   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:53.122721   61935 cri.go:89] found id: ""
	I0915 08:10:53.122751   61935 logs.go:276] 0 containers: []
	W0915 08:10:53.122763   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:53.122769   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:53.122827   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:53.174367   61935 cri.go:89] found id: ""
	I0915 08:10:53.174401   61935 logs.go:276] 0 containers: []
	W0915 08:10:53.174413   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:53.174421   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:53.174485   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:53.208279   61935 cri.go:89] found id: ""
	I0915 08:10:53.208301   61935 logs.go:276] 0 containers: []
	W0915 08:10:53.208309   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:53.208314   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:53.208368   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:53.244625   61935 cri.go:89] found id: ""
	I0915 08:10:53.244651   61935 logs.go:276] 0 containers: []
	W0915 08:10:53.244660   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:53.244668   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:53.244678   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:53.261008   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:53.261039   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:53.330310   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:53.330332   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:53.330344   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:53.410387   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:53.410426   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:53.448286   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:53.448310   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:56.000543   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:56.015008   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:56.015076   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:56.055957   61935 cri.go:89] found id: ""
	I0915 08:10:56.055984   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.055993   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:56.055999   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:56.056043   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:56.091678   61935 cri.go:89] found id: ""
	I0915 08:10:56.091703   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.091712   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:56.091717   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:56.091770   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:56.125261   61935 cri.go:89] found id: ""
	I0915 08:10:56.125286   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.125297   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:56.125305   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:56.125362   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:56.161449   61935 cri.go:89] found id: ""
	I0915 08:10:56.161470   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.161479   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:56.161484   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:56.161532   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:56.201563   61935 cri.go:89] found id: ""
	I0915 08:10:56.201590   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.201601   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:56.201617   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:56.201689   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:56.236723   61935 cri.go:89] found id: ""
	I0915 08:10:56.236743   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.236750   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:56.236756   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:56.236799   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:56.276576   61935 cri.go:89] found id: ""
	I0915 08:10:56.276608   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.276619   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:56.276627   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:56.276689   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:56.309994   61935 cri.go:89] found id: ""
	I0915 08:10:56.310021   61935 logs.go:276] 0 containers: []
	W0915 08:10:56.310032   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:56.310043   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:56.310058   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:56.381278   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:56.381303   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:56.381319   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:56.464850   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:56.464888   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:10:56.501246   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:56.501271   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:56.551680   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:56.551708   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:58.010136   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:00.011050   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:59.719651   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:02.218755   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:10:59.065588   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:10:59.079063   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:10:59.079120   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:10:59.113863   61935 cri.go:89] found id: ""
	I0915 08:10:59.113890   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.113900   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:10:59.113907   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:10:59.113974   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:10:59.150468   61935 cri.go:89] found id: ""
	I0915 08:10:59.150500   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.150510   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:10:59.150517   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:10:59.150580   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:10:59.185685   61935 cri.go:89] found id: ""
	I0915 08:10:59.185707   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.185715   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:10:59.185721   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:10:59.185777   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:10:59.223457   61935 cri.go:89] found id: ""
	I0915 08:10:59.223480   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.223491   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:10:59.223497   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:10:59.223556   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:10:59.258188   61935 cri.go:89] found id: ""
	I0915 08:10:59.258219   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.258231   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:10:59.258239   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:10:59.258300   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:10:59.296494   61935 cri.go:89] found id: ""
	I0915 08:10:59.296516   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.296532   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:10:59.296540   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:10:59.296600   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:10:59.332169   61935 cri.go:89] found id: ""
	I0915 08:10:59.332193   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.332201   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:10:59.332206   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:10:59.332262   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:10:59.367179   61935 cri.go:89] found id: ""
	I0915 08:10:59.367207   61935 logs.go:276] 0 containers: []
	W0915 08:10:59.367218   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:10:59.367228   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:10:59.367242   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:10:59.418444   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:10:59.418475   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:10:59.432055   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:10:59.432078   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:10:59.501246   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:10:59.501266   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:10:59.501280   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:10:59.592465   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:10:59.592499   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:02.156572   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:02.170052   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:02.170157   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:02.205081   61935 cri.go:89] found id: ""
	I0915 08:11:02.205105   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.205116   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:02.205123   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:02.205194   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:02.241461   61935 cri.go:89] found id: ""
	I0915 08:11:02.241489   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.241500   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:02.241507   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:02.241567   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:02.280607   61935 cri.go:89] found id: ""
	I0915 08:11:02.280639   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.280650   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:02.280658   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:02.280736   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:02.315858   61935 cri.go:89] found id: ""
	I0915 08:11:02.315889   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.315900   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:02.315908   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:02.315965   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:02.349820   61935 cri.go:89] found id: ""
	I0915 08:11:02.349845   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.349853   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:02.349859   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:02.349912   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:02.387820   61935 cri.go:89] found id: ""
	I0915 08:11:02.387846   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.387856   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:02.387861   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:02.387910   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:02.422485   61935 cri.go:89] found id: ""
	I0915 08:11:02.422512   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.422523   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:02.422530   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:02.422595   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:02.460952   61935 cri.go:89] found id: ""
	I0915 08:11:02.460974   61935 logs.go:276] 0 containers: []
	W0915 08:11:02.460989   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:02.460999   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:02.461013   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:02.514262   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:02.514290   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:02.528401   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:02.528427   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:02.599829   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:02.599848   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:02.599862   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:02.675462   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:02.675501   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:02.511225   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:04.511327   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:04.219952   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:06.718964   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:05.216062   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:05.229709   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:05.229768   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:05.266186   61935 cri.go:89] found id: ""
	I0915 08:11:05.266214   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.266229   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:05.266235   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:05.266282   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:05.306439   61935 cri.go:89] found id: ""
	I0915 08:11:05.306462   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.306469   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:05.306475   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:05.306532   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:05.354816   61935 cri.go:89] found id: ""
	I0915 08:11:05.354847   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.354858   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:05.354866   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:05.354927   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:05.407417   61935 cri.go:89] found id: ""
	I0915 08:11:05.407447   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.407458   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:05.407466   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:05.407524   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:05.448134   61935 cri.go:89] found id: ""
	I0915 08:11:05.448160   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.448169   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:05.448175   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:05.448224   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:05.481990   61935 cri.go:89] found id: ""
	I0915 08:11:05.482012   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.482019   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:05.482026   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:05.482081   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:05.517129   61935 cri.go:89] found id: ""
	I0915 08:11:05.517152   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.517161   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:05.517173   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:05.517234   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:05.555899   61935 cri.go:89] found id: ""
	I0915 08:11:05.555929   61935 logs.go:276] 0 containers: []
	W0915 08:11:05.555939   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:05.555948   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:05.555966   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:05.632030   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:05.632051   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:05.632066   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:05.708978   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:05.709011   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:05.748520   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:05.748546   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:05.800638   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:05.800672   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:07.011475   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:09.511282   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:08.720106   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:11.218663   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:08.314080   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:08.328778   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:08.328855   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:08.372467   61935 cri.go:89] found id: ""
	I0915 08:11:08.372490   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.372498   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:08.372503   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:08.372549   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:08.408054   61935 cri.go:89] found id: ""
	I0915 08:11:08.408083   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.408094   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:08.408102   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:08.408168   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:08.445046   61935 cri.go:89] found id: ""
	I0915 08:11:08.445068   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.445077   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:08.445083   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:08.445141   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:08.482240   61935 cri.go:89] found id: ""
	I0915 08:11:08.482263   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.482271   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:08.482276   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:08.482322   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:08.519420   61935 cri.go:89] found id: ""
	I0915 08:11:08.519442   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.519451   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:08.519456   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:08.519501   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:08.553973   61935 cri.go:89] found id: ""
	I0915 08:11:08.554001   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.554011   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:08.554018   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:08.554077   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:08.590690   61935 cri.go:89] found id: ""
	I0915 08:11:08.590717   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.590727   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:08.590735   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:08.590799   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:08.625973   61935 cri.go:89] found id: ""
	I0915 08:11:08.626005   61935 logs.go:276] 0 containers: []
	W0915 08:11:08.626015   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:08.626027   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:08.626042   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:08.638977   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:08.639014   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:08.708297   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:08.708317   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:08.708332   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:08.795368   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:08.795413   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:08.837449   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:08.837478   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:11.392314   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:11.408657   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:11.408724   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:11.447349   61935 cri.go:89] found id: ""
	I0915 08:11:11.447375   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.447386   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:11.447393   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:11.447467   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:11.486683   61935 cri.go:89] found id: ""
	I0915 08:11:11.486706   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.486715   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:11.486720   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:11.486769   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:11.523768   61935 cri.go:89] found id: ""
	I0915 08:11:11.523805   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.523816   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:11.523826   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:11.523883   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:11.559433   61935 cri.go:89] found id: ""
	I0915 08:11:11.559458   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.559467   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:11.559472   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:11.559518   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:11.595663   61935 cri.go:89] found id: ""
	I0915 08:11:11.595691   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.595702   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:11.595709   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:11.595772   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:11.631249   61935 cri.go:89] found id: ""
	I0915 08:11:11.631280   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.631291   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:11.631298   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:11.631359   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:11.667671   61935 cri.go:89] found id: ""
	I0915 08:11:11.667697   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.667706   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:11.667712   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:11.667759   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:11.708386   61935 cri.go:89] found id: ""
	I0915 08:11:11.708415   61935 logs.go:276] 0 containers: []
	W0915 08:11:11.708431   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:11.708441   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:11.708456   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:11.759926   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:11.759960   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:11.773576   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:11.773600   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:11.842283   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:11.842310   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:11.842324   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:11.922600   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:11.922638   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:11.512242   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:14.011241   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:13.219159   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:15.718592   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:17.719469   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:14.463595   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:14.476586   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:14.476656   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:14.510973   61935 cri.go:89] found id: ""
	I0915 08:11:14.510996   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.511006   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:14.511013   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:14.511071   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:14.543978   61935 cri.go:89] found id: ""
	I0915 08:11:14.544002   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.544013   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:14.544019   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:14.544078   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:14.577743   61935 cri.go:89] found id: ""
	I0915 08:11:14.577767   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.577775   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:14.577781   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:14.577840   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:14.612666   61935 cri.go:89] found id: ""
	I0915 08:11:14.612690   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.612701   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:14.612707   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:14.612767   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:14.646623   61935 cri.go:89] found id: ""
	I0915 08:11:14.646647   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.646658   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:14.646666   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:14.646728   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:14.681633   61935 cri.go:89] found id: ""
	I0915 08:11:14.681662   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.681672   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:14.681680   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:14.681741   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:14.714514   61935 cri.go:89] found id: ""
	I0915 08:11:14.714536   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.714546   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:14.714553   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:14.714613   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:14.750800   61935 cri.go:89] found id: ""
	I0915 08:11:14.750827   61935 logs.go:276] 0 containers: []
	W0915 08:11:14.750837   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:14.750846   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:14.750860   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:14.802240   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:14.802278   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:14.815718   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:14.815740   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:14.885359   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:14.885380   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:14.885391   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:14.963717   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:14.963752   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:17.507050   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:17.521146   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:17.521213   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:17.554857   61935 cri.go:89] found id: ""
	I0915 08:11:17.554881   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.554893   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:17.554901   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:17.554966   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:17.588168   61935 cri.go:89] found id: ""
	I0915 08:11:17.588198   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.588215   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:17.588222   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:17.588284   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:17.622809   61935 cri.go:89] found id: ""
	I0915 08:11:17.622834   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.622844   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:17.622852   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:17.622919   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:17.658089   61935 cri.go:89] found id: ""
	I0915 08:11:17.658115   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.658123   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:17.658129   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:17.658184   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:17.690266   61935 cri.go:89] found id: ""
	I0915 08:11:17.690291   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.690302   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:17.690308   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:17.690367   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:17.727848   61935 cri.go:89] found id: ""
	I0915 08:11:17.727875   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.727886   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:17.727893   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:17.727973   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:17.762550   61935 cri.go:89] found id: ""
	I0915 08:11:17.762584   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.762592   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:17.762598   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:17.762646   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:17.795475   61935 cri.go:89] found id: ""
	I0915 08:11:17.795501   61935 logs.go:276] 0 containers: []
	W0915 08:11:17.795509   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:17.795517   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:17.795528   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:17.833742   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:17.833773   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:17.885139   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:17.885182   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:16.510131   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:18.510715   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:20.218498   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:22.218763   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:17.900600   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:17.900627   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:17.972521   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:17.972548   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:17.972570   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:20.551005   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:20.564499   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:20.564559   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:20.602627   61935 cri.go:89] found id: ""
	I0915 08:11:20.602654   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.602664   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:20.602672   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:20.602729   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:20.638957   61935 cri.go:89] found id: ""
	I0915 08:11:20.638982   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.638994   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:20.639001   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:20.639069   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:20.676978   61935 cri.go:89] found id: ""
	I0915 08:11:20.677007   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.677019   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:20.677026   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:20.677084   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:20.715455   61935 cri.go:89] found id: ""
	I0915 08:11:20.715484   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.715494   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:20.715502   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:20.715561   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:20.751226   61935 cri.go:89] found id: ""
	I0915 08:11:20.751249   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.751260   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:20.751268   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:20.751328   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:20.784686   61935 cri.go:89] found id: ""
	I0915 08:11:20.784708   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.784716   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:20.784722   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:20.784768   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:20.820800   61935 cri.go:89] found id: ""
	I0915 08:11:20.820826   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.820836   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:20.820843   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:20.820902   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:20.857728   61935 cri.go:89] found id: ""
	I0915 08:11:20.857752   61935 logs.go:276] 0 containers: []
	W0915 08:11:20.857761   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:20.857769   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:20.857781   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:20.870684   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:20.870708   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:20.942151   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:20.942173   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:20.942185   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:21.019766   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:21.019794   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:21.057799   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:21.057836   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:21.010329   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:23.011115   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:25.510540   61464 pod_ready.go:103] pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:24.718389   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:27.220641   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:23.610018   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:23.623375   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:23.623433   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:23.657952   61935 cri.go:89] found id: ""
	I0915 08:11:23.657979   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.657991   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:23.657998   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:23.658059   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:23.691611   61935 cri.go:89] found id: ""
	I0915 08:11:23.691634   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.691642   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:23.691648   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:23.691695   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:23.731848   61935 cri.go:89] found id: ""
	I0915 08:11:23.731879   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.731890   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:23.731898   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:23.731959   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:23.765541   61935 cri.go:89] found id: ""
	I0915 08:11:23.765569   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.765580   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:23.765588   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:23.765642   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:23.799530   61935 cri.go:89] found id: ""
	I0915 08:11:23.799557   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.799568   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:23.799575   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:23.799641   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:23.843502   61935 cri.go:89] found id: ""
	I0915 08:11:23.843523   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.843531   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:23.843537   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:23.843660   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:23.877410   61935 cri.go:89] found id: ""
	I0915 08:11:23.877436   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.877448   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:23.877455   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:23.877520   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:23.918787   61935 cri.go:89] found id: ""
	I0915 08:11:23.918813   61935 logs.go:276] 0 containers: []
	W0915 08:11:23.918821   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:23.918829   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:23.918840   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:24.000608   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:24.000643   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:24.042838   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:24.042864   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:24.096787   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:24.096822   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:24.110835   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:24.110861   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:24.184141   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:26.684510   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:26.697651   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:26.697709   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:26.731759   61935 cri.go:89] found id: ""
	I0915 08:11:26.731780   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.731791   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:26.731798   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:26.731859   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:26.768304   61935 cri.go:89] found id: ""
	I0915 08:11:26.768325   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.768333   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:26.768339   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:26.768385   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:26.806164   61935 cri.go:89] found id: ""
	I0915 08:11:26.806199   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.806211   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:26.806219   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:26.806271   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:26.842418   61935 cri.go:89] found id: ""
	I0915 08:11:26.842446   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.842456   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:26.842464   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:26.842531   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:26.876659   61935 cri.go:89] found id: ""
	I0915 08:11:26.876685   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.876696   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:26.876703   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:26.876767   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:26.910689   61935 cri.go:89] found id: ""
	I0915 08:11:26.910714   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.910722   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:26.910729   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:26.910783   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:26.947359   61935 cri.go:89] found id: ""
	I0915 08:11:26.947381   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.947392   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:26.947399   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:26.947454   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:26.982572   61935 cri.go:89] found id: ""
	I0915 08:11:26.982600   61935 logs.go:276] 0 containers: []
	W0915 08:11:26.982610   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:26.982621   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:26.982636   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:27.056711   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:27.056734   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:27.056754   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:27.149425   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:27.149458   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:27.191425   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:27.191455   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:27.250474   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:27.250510   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:27.012084   61464 pod_ready.go:82] duration metric: took 4m0.007790311s for pod "metrics-server-6867b74b74-mh8xh" in "kube-system" namespace to be "Ready" ...
	E0915 08:11:27.012115   61464 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0915 08:11:27.012125   61464 pod_ready.go:39] duration metric: took 4m4.569278108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 08:11:27.012151   61464 api_server.go:52] waiting for apiserver process to appear ...
	I0915 08:11:27.012196   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:27.012254   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:27.071558   61464 cri.go:89] found id: "ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e"
	I0915 08:11:27.071582   61464 cri.go:89] found id: ""
	I0915 08:11:27.071590   61464 logs.go:276] 1 containers: [ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e]
	I0915 08:11:27.071659   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.076670   61464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:27.076732   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:27.121754   61464 cri.go:89] found id: "d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1"
	I0915 08:11:27.121780   61464 cri.go:89] found id: ""
	I0915 08:11:27.121790   61464 logs.go:276] 1 containers: [d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1]
	I0915 08:11:27.121863   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.127831   61464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:27.127904   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:27.170452   61464 cri.go:89] found id: "24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d"
	I0915 08:11:27.170489   61464 cri.go:89] found id: ""
	I0915 08:11:27.170500   61464 logs.go:276] 1 containers: [24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d]
	I0915 08:11:27.170564   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.175180   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:27.175262   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:27.215900   61464 cri.go:89] found id: "8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30"
	I0915 08:11:27.215934   61464 cri.go:89] found id: ""
	I0915 08:11:27.215944   61464 logs.go:276] 1 containers: [8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30]
	I0915 08:11:27.216004   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.221593   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:27.221650   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:27.261223   61464 cri.go:89] found id: "5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac"
	I0915 08:11:27.261251   61464 cri.go:89] found id: ""
	I0915 08:11:27.261261   61464 logs.go:276] 1 containers: [5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac]
	I0915 08:11:27.261318   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.265563   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:27.265631   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:27.306702   61464 cri.go:89] found id: "7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6"
	I0915 08:11:27.306727   61464 cri.go:89] found id: ""
	I0915 08:11:27.306737   61464 logs.go:276] 1 containers: [7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6]
	I0915 08:11:27.306794   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.311102   61464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:27.311160   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:27.346785   61464 cri.go:89] found id: ""
	I0915 08:11:27.346809   61464 logs.go:276] 0 containers: []
	W0915 08:11:27.346817   61464 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:27.346822   61464 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:11:27.346872   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:11:27.381527   61464 cri.go:89] found id: "d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf"
	I0915 08:11:27.381546   61464 cri.go:89] found id: "3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621"
	I0915 08:11:27.381550   61464 cri.go:89] found id: ""
	I0915 08:11:27.381557   61464 logs.go:276] 2 containers: [d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf 3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621]
	I0915 08:11:27.381616   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.386113   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:27.389882   61464 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:27.389898   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:27.458363   61464 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:27.458393   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 08:11:27.584877   61464 logs.go:123] Gathering logs for etcd [d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1] ...
	I0915 08:11:27.584909   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1"
	I0915 08:11:27.627872   61464 logs.go:123] Gathering logs for coredns [24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d] ...
	I0915 08:11:27.627904   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d"
	I0915 08:11:27.667186   61464 logs.go:123] Gathering logs for kube-scheduler [8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30] ...
	I0915 08:11:27.667249   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30"
	I0915 08:11:27.709542   61464 logs.go:123] Gathering logs for kube-proxy [5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac] ...
	I0915 08:11:27.709570   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac"
	I0915 08:11:27.761963   61464 logs.go:123] Gathering logs for storage-provisioner [d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf] ...
	I0915 08:11:27.761991   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf"
	I0915 08:11:27.798680   61464 logs.go:123] Gathering logs for container status ...
	I0915 08:11:27.798712   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:27.839002   61464 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:27.839029   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:27.853989   61464 logs.go:123] Gathering logs for kube-apiserver [ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e] ...
	I0915 08:11:27.854019   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e"
	I0915 08:11:27.909877   61464 logs.go:123] Gathering logs for kube-controller-manager [7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6] ...
	I0915 08:11:27.909919   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6"
	I0915 08:11:27.975462   61464 logs.go:123] Gathering logs for storage-provisioner [3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621] ...
	I0915 08:11:27.975493   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621"
	I0915 08:11:28.019824   61464 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:28.019847   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:30.980118   61464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:30.995976   61464 api_server.go:72] duration metric: took 4m15.812135206s to wait for apiserver process to appear ...
	I0915 08:11:30.995999   61464 api_server.go:88] waiting for apiserver healthz status ...
	I0915 08:11:30.996048   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:30.996109   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:29.719131   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:31.719430   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:29.766998   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:29.780114   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:29.780188   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:29.818020   61935 cri.go:89] found id: ""
	I0915 08:11:29.818050   61935 logs.go:276] 0 containers: []
	W0915 08:11:29.818057   61935 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:11:29.818063   61935 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:29.818114   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:29.857392   61935 cri.go:89] found id: ""
	I0915 08:11:29.857415   61935 logs.go:276] 0 containers: []
	W0915 08:11:29.857423   61935 logs.go:278] No container was found matching "etcd"
	I0915 08:11:29.857428   61935 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:29.857493   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:29.890862   61935 cri.go:89] found id: ""
	I0915 08:11:29.890886   61935 logs.go:276] 0 containers: []
	W0915 08:11:29.890895   61935 logs.go:278] No container was found matching "coredns"
	I0915 08:11:29.890900   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:29.890955   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:29.926792   61935 cri.go:89] found id: ""
	I0915 08:11:29.926826   61935 logs.go:276] 0 containers: []
	W0915 08:11:29.926837   61935 logs.go:278] No container was found matching "kube-scheduler"
	I0915 08:11:29.926845   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:29.926903   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:29.961703   61935 cri.go:89] found id: ""
	I0915 08:11:29.961726   61935 logs.go:276] 0 containers: []
	W0915 08:11:29.961733   61935 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:11:29.961738   61935 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:29.961785   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:29.997799   61935 cri.go:89] found id: ""
	I0915 08:11:29.997849   61935 logs.go:276] 0 containers: []
	W0915 08:11:29.997862   61935 logs.go:278] No container was found matching "kube-controller-manager"
	I0915 08:11:29.997869   61935 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:29.997931   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:30.035615   61935 cri.go:89] found id: ""
	I0915 08:11:30.035646   61935 logs.go:276] 0 containers: []
	W0915 08:11:30.035657   61935 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:30.035664   61935 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0915 08:11:30.035736   61935 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0915 08:11:30.071038   61935 cri.go:89] found id: ""
	I0915 08:11:30.071065   61935 logs.go:276] 0 containers: []
	W0915 08:11:30.071073   61935 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0915 08:11:30.071087   61935 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:30.071099   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:30.123474   61935 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:30.123509   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:30.137029   61935 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:30.137053   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:11:30.213400   61935 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:11:30.213429   61935 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:30.213443   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:30.292711   61935 logs.go:123] Gathering logs for container status ...
	I0915 08:11:30.292754   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:32.832146   61935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:11:32.845699   61935 kubeadm.go:597] duration metric: took 4m3.279455804s to restartPrimaryControlPlane
	W0915 08:11:32.845789   61935 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0915 08:11:32.845830   61935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0915 08:11:33.699175   61935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 08:11:33.714268   61935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 08:11:33.726783   61935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 08:11:33.736884   61935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 08:11:33.736908   61935 kubeadm.go:157] found existing configuration files:
	
	I0915 08:11:33.736951   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 08:11:33.746980   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 08:11:33.747053   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 08:11:33.758081   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 08:11:33.768254   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 08:11:33.768321   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 08:11:33.778480   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 08:11:33.787735   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 08:11:33.787804   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 08:11:33.797592   61935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 08:11:33.807624   61935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 08:11:33.807707   61935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 08:11:33.817715   61935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 08:11:33.883618   61935 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0915 08:11:33.883744   61935 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 08:11:34.033436   61935 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 08:11:34.033625   61935 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 08:11:34.033763   61935 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0915 08:11:34.221435   61935 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 08:11:34.223673   61935 out.go:235]   - Generating certificates and keys ...
	I0915 08:11:34.223803   61935 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 08:11:34.223925   61935 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 08:11:34.224049   61935 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 08:11:34.224129   61935 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 08:11:34.224214   61935 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 08:11:34.224285   61935 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 08:11:34.224361   61935 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 08:11:34.224723   61935 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 08:11:34.225139   61935 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 08:11:34.225571   61935 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 08:11:34.225695   61935 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 08:11:34.225829   61935 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 08:11:34.556956   61935 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 08:11:34.808401   61935 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 08:11:35.154958   61935 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 08:11:35.313561   61935 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 08:11:35.329032   61935 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 08:11:35.330285   61935 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 08:11:35.330368   61935 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 08:11:35.478100   61935 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 08:11:31.035219   61464 cri.go:89] found id: "ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e"
	I0915 08:11:31.035251   61464 cri.go:89] found id: ""
	I0915 08:11:31.035261   61464 logs.go:276] 1 containers: [ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e]
	I0915 08:11:31.035327   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.039348   61464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:31.039408   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:31.076519   61464 cri.go:89] found id: "d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1"
	I0915 08:11:31.076544   61464 cri.go:89] found id: ""
	I0915 08:11:31.076552   61464 logs.go:276] 1 containers: [d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1]
	I0915 08:11:31.076599   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.081393   61464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:31.081458   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:31.116592   61464 cri.go:89] found id: "24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d"
	I0915 08:11:31.116617   61464 cri.go:89] found id: ""
	I0915 08:11:31.116626   61464 logs.go:276] 1 containers: [24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d]
	I0915 08:11:31.116673   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.121843   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:31.121915   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:31.157697   61464 cri.go:89] found id: "8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30"
	I0915 08:11:31.157718   61464 cri.go:89] found id: ""
	I0915 08:11:31.157727   61464 logs.go:276] 1 containers: [8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30]
	I0915 08:11:31.157789   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.162156   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:31.162222   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:31.199135   61464 cri.go:89] found id: "5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac"
	I0915 08:11:31.199160   61464 cri.go:89] found id: ""
	I0915 08:11:31.199167   61464 logs.go:276] 1 containers: [5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac]
	I0915 08:11:31.199237   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.203759   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:31.203826   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:31.242506   61464 cri.go:89] found id: "7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6"
	I0915 08:11:31.242531   61464 cri.go:89] found id: ""
	I0915 08:11:31.242540   61464 logs.go:276] 1 containers: [7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6]
	I0915 08:11:31.242604   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.246924   61464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:31.246993   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:31.284514   61464 cri.go:89] found id: ""
	I0915 08:11:31.284537   61464 logs.go:276] 0 containers: []
	W0915 08:11:31.284548   61464 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:31.284555   61464 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:11:31.284615   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:11:31.323835   61464 cri.go:89] found id: "d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf"
	I0915 08:11:31.323862   61464 cri.go:89] found id: "3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621"
	I0915 08:11:31.323866   61464 cri.go:89] found id: ""
	I0915 08:11:31.323872   61464 logs.go:276] 2 containers: [d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf 3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621]
	I0915 08:11:31.323919   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.328102   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:31.332160   61464 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:31.332181   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:31.399554   61464 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:31.399589   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 08:11:31.512615   61464 logs.go:123] Gathering logs for coredns [24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d] ...
	I0915 08:11:31.512657   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d"
	I0915 08:11:31.549925   61464 logs.go:123] Gathering logs for storage-provisioner [d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf] ...
	I0915 08:11:31.549957   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf"
	I0915 08:11:31.594201   61464 logs.go:123] Gathering logs for storage-provisioner [3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621] ...
	I0915 08:11:31.594226   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621"
	I0915 08:11:31.630217   61464 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:31.630248   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:32.057462   61464 logs.go:123] Gathering logs for container status ...
	I0915 08:11:32.057499   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:32.106649   61464 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:32.106682   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:32.122534   61464 logs.go:123] Gathering logs for kube-apiserver [ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e] ...
	I0915 08:11:32.122565   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e"
	I0915 08:11:32.163827   61464 logs.go:123] Gathering logs for etcd [d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1] ...
	I0915 08:11:32.163855   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1"
	I0915 08:11:32.207670   61464 logs.go:123] Gathering logs for kube-scheduler [8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30] ...
	I0915 08:11:32.207702   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30"
	I0915 08:11:32.246535   61464 logs.go:123] Gathering logs for kube-proxy [5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac] ...
	I0915 08:11:32.246562   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac"
	I0915 08:11:32.287981   61464 logs.go:123] Gathering logs for kube-controller-manager [7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6] ...
	I0915 08:11:32.288010   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6"
	I0915 08:11:34.843881   61464 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0915 08:11:34.850068   61464 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0915 08:11:34.851103   61464 api_server.go:141] control plane version: v1.31.1
	I0915 08:11:34.851123   61464 api_server.go:131] duration metric: took 3.85511648s to wait for apiserver health ...
	I0915 08:11:34.851130   61464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 08:11:34.851150   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:11:34.851200   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:11:34.893053   61464 cri.go:89] found id: "ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e"
	I0915 08:11:34.893079   61464 cri.go:89] found id: ""
	I0915 08:11:34.893088   61464 logs.go:276] 1 containers: [ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e]
	I0915 08:11:34.893133   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:34.897447   61464 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:11:34.897517   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:11:34.935408   61464 cri.go:89] found id: "d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1"
	I0915 08:11:34.935432   61464 cri.go:89] found id: ""
	I0915 08:11:34.935442   61464 logs.go:276] 1 containers: [d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1]
	I0915 08:11:34.935494   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:34.940415   61464 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:11:34.940483   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:11:34.979770   61464 cri.go:89] found id: "24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d"
	I0915 08:11:34.979797   61464 cri.go:89] found id: ""
	I0915 08:11:34.979806   61464 logs.go:276] 1 containers: [24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d]
	I0915 08:11:34.979868   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:34.984435   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:11:34.984505   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:11:35.037462   61464 cri.go:89] found id: "8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30"
	I0915 08:11:35.037498   61464 cri.go:89] found id: ""
	I0915 08:11:35.037515   61464 logs.go:276] 1 containers: [8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30]
	I0915 08:11:35.037572   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:35.042655   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:11:35.042736   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:11:35.083401   61464 cri.go:89] found id: "5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac"
	I0915 08:11:35.083422   61464 cri.go:89] found id: ""
	I0915 08:11:35.083431   61464 logs.go:276] 1 containers: [5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac]
	I0915 08:11:35.083494   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:35.088118   61464 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:11:35.088183   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:11:35.123725   61464 cri.go:89] found id: "7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6"
	I0915 08:11:35.123751   61464 cri.go:89] found id: ""
	I0915 08:11:35.123761   61464 logs.go:276] 1 containers: [7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6]
	I0915 08:11:35.123807   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:35.128198   61464 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:11:35.128264   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:11:35.165172   61464 cri.go:89] found id: ""
	I0915 08:11:35.165202   61464 logs.go:276] 0 containers: []
	W0915 08:11:35.165213   61464 logs.go:278] No container was found matching "kindnet"
	I0915 08:11:35.165221   61464 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:11:35.165277   61464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:11:35.205323   61464 cri.go:89] found id: "d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf"
	I0915 08:11:35.205348   61464 cri.go:89] found id: "3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621"
	I0915 08:11:35.205353   61464 cri.go:89] found id: ""
	I0915 08:11:35.205363   61464 logs.go:276] 2 containers: [d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf 3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621]
	I0915 08:11:35.205423   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:35.209506   61464 ssh_runner.go:195] Run: which crictl
	I0915 08:11:35.214982   61464 logs.go:123] Gathering logs for etcd [d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1] ...
	I0915 08:11:35.215012   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d567034bb764a32afa10e27321fe733854e4a3ba4b956393f21590f9dda88bd1"
	I0915 08:11:35.266336   61464 logs.go:123] Gathering logs for coredns [24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d] ...
	I0915 08:11:35.266365   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24df3462da8d6f4e38affd3a9c1deea1b655b427cedc04a41e50e0f9e813a63d"
	I0915 08:11:35.312260   61464 logs.go:123] Gathering logs for kube-proxy [5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac] ...
	I0915 08:11:35.312294   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cfa34541ada928716cf14a0f64adab0b836fae818dfcc06e393a9d9f8e01fac"
	I0915 08:11:35.350171   61464 logs.go:123] Gathering logs for kube-controller-manager [7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6] ...
	I0915 08:11:35.350215   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c321eec0ceba6e03e6159aa4b7b25d172c596aff5a6efbd0cab980a42ba37c6"
	I0915 08:11:35.418592   61464 logs.go:123] Gathering logs for storage-provisioner [d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf] ...
	I0915 08:11:35.418640   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d19594d184fa290ba7e5de5af4139a66435db94cb57f3b230d34f025e7b304bf"
	I0915 08:11:35.468880   61464 logs.go:123] Gathering logs for kubelet ...
	I0915 08:11:35.468912   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:11:35.546352   61464 logs.go:123] Gathering logs for dmesg ...
	I0915 08:11:35.546386   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:11:35.561879   61464 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:11:35.561907   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 08:11:35.673756   61464 logs.go:123] Gathering logs for storage-provisioner [3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621] ...
	I0915 08:11:35.673787   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b97d136ff744e2f9c8d97cb6243ca451674b7b637c85078dc5128de02e6a621"
	I0915 08:11:35.712489   61464 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:11:35.712517   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:11:33.722258   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:36.220167   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:35.480122   61935 out.go:235]   - Booting up control plane ...
	I0915 08:11:35.480271   61935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 08:11:35.485105   61935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 08:11:35.494290   61935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 08:11:35.495763   61935 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 08:11:35.499908   61935 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0915 08:11:36.107580   61464 logs.go:123] Gathering logs for container status ...
	I0915 08:11:36.107625   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:11:36.150573   61464 logs.go:123] Gathering logs for kube-apiserver [ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e] ...
	I0915 08:11:36.150604   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad9f5e86e0c0fc4ab661290de56d156c3c4d3607a9e24c0aacfae003d6d5419e"
	I0915 08:11:36.193720   61464 logs.go:123] Gathering logs for kube-scheduler [8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30] ...
	I0915 08:11:36.193750   61464 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8addb762fc7555621576c18cbc1c4ef7145c6cd83370f8f1c69431367d135d30"
	I0915 08:11:38.740485   61464 system_pods.go:59] 8 kube-system pods found
	I0915 08:11:38.740516   61464 system_pods.go:61] "coredns-7c65d6cfc9-np76n" [a54ae610-21a2-491a-84b7-13fdd31ad5a2] Running
	I0915 08:11:38.740522   61464 system_pods.go:61] "etcd-embed-certs-474196" [dd0695b8-d16b-4f34-adae-3c284f2ea135] Running
	I0915 08:11:38.740526   61464 system_pods.go:61] "kube-apiserver-embed-certs-474196" [319b041a-0bde-442e-8726-10164c01f732] Running
	I0915 08:11:38.740529   61464 system_pods.go:61] "kube-controller-manager-embed-certs-474196" [ca3e38d2-bb63-480c-b085-a89670340402] Running
	I0915 08:11:38.740532   61464 system_pods.go:61] "kube-proxy-5tmwl" [fdcd8093-0379-45b1-b02e-a4f61444848c] Running
	I0915 08:11:38.740536   61464 system_pods.go:61] "kube-scheduler-embed-certs-474196" [04adc8e2-296f-40f9-bdbd-6e7ad416ce32] Running
	I0915 08:11:38.740541   61464 system_pods.go:61] "metrics-server-6867b74b74-mh8xh" [8e97a269-63e1-4fb0-b8b7-192535e25af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 08:11:38.740546   61464 system_pods.go:61] "storage-provisioner" [baf93e99-ee90-4247-85c2-3ebb2324795d] Running
	I0915 08:11:38.740558   61464 system_pods.go:74] duration metric: took 3.889420486s to wait for pod list to return data ...
	I0915 08:11:38.740566   61464 default_sa.go:34] waiting for default service account to be created ...
	I0915 08:11:38.743708   61464 default_sa.go:45] found service account: "default"
	I0915 08:11:38.743734   61464 default_sa.go:55] duration metric: took 3.157418ms for default service account to be created ...
	I0915 08:11:38.743744   61464 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 08:11:38.748116   61464 system_pods.go:86] 8 kube-system pods found
	I0915 08:11:38.748141   61464 system_pods.go:89] "coredns-7c65d6cfc9-np76n" [a54ae610-21a2-491a-84b7-13fdd31ad5a2] Running
	I0915 08:11:38.748147   61464 system_pods.go:89] "etcd-embed-certs-474196" [dd0695b8-d16b-4f34-adae-3c284f2ea135] Running
	I0915 08:11:38.748151   61464 system_pods.go:89] "kube-apiserver-embed-certs-474196" [319b041a-0bde-442e-8726-10164c01f732] Running
	I0915 08:11:38.748155   61464 system_pods.go:89] "kube-controller-manager-embed-certs-474196" [ca3e38d2-bb63-480c-b085-a89670340402] Running
	I0915 08:11:38.748159   61464 system_pods.go:89] "kube-proxy-5tmwl" [fdcd8093-0379-45b1-b02e-a4f61444848c] Running
	I0915 08:11:38.748162   61464 system_pods.go:89] "kube-scheduler-embed-certs-474196" [04adc8e2-296f-40f9-bdbd-6e7ad416ce32] Running
	I0915 08:11:38.748167   61464 system_pods.go:89] "metrics-server-6867b74b74-mh8xh" [8e97a269-63e1-4fb0-b8b7-192535e25af0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 08:11:38.748172   61464 system_pods.go:89] "storage-provisioner" [baf93e99-ee90-4247-85c2-3ebb2324795d] Running
	I0915 08:11:38.748180   61464 system_pods.go:126] duration metric: took 4.429859ms to wait for k8s-apps to be running ...
	I0915 08:11:38.748190   61464 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 08:11:38.748236   61464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 08:11:38.763699   61464 system_svc.go:56] duration metric: took 15.500675ms WaitForService to wait for kubelet
	I0915 08:11:38.763725   61464 kubeadm.go:582] duration metric: took 4m23.579888159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 08:11:38.763749   61464 node_conditions.go:102] verifying NodePressure condition ...
	I0915 08:11:38.767212   61464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 08:11:38.767239   61464 node_conditions.go:123] node cpu capacity is 2
	I0915 08:11:38.767268   61464 node_conditions.go:105] duration metric: took 3.513343ms to run NodePressure ...
	I0915 08:11:38.767283   61464 start.go:241] waiting for startup goroutines ...
	I0915 08:11:38.767293   61464 start.go:246] waiting for cluster config update ...
	I0915 08:11:38.767311   61464 start.go:255] writing updated cluster config ...
	I0915 08:11:38.767699   61464 ssh_runner.go:195] Run: rm -f paused
	I0915 08:11:38.815645   61464 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 08:11:38.818772   61464 out.go:177] * Done! kubectl is now configured to use "embed-certs-474196" cluster and "default" namespace by default
	I0915 08:11:38.718036   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:40.718713   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:42.718943   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:45.218084   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:47.219624   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:49.718518   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:51.719482   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:54.220128   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:56.718669   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:11:58.719125   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:01.219568   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:03.718357   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:06.219383   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:08.718690   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:10.720582   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:13.219243   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:15.219968   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:17.719561   61251 pod_ready.go:103] pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace has status "Ready":"False"
	I0915 08:12:15.501592   61935 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0915 08:12:15.502255   61935 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 08:12:15.502475   61935 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 08:12:18.219331   61251 pod_ready.go:82] duration metric: took 4m0.007254665s for pod "metrics-server-6867b74b74-d5nzc" in "kube-system" namespace to be "Ready" ...
	E0915 08:12:18.219355   61251 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0915 08:12:18.219366   61251 pod_ready.go:39] duration metric: took 4m3.607507058s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 08:12:18.219384   61251 api_server.go:52] waiting for apiserver process to appear ...
	I0915 08:12:18.219419   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:12:18.219478   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:12:18.266049   61251 cri.go:89] found id: "f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65"
	I0915 08:12:18.266070   61251 cri.go:89] found id: ""
	I0915 08:12:18.266089   61251 logs.go:276] 1 containers: [f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65]
	I0915 08:12:18.266153   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.270675   61251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:12:18.270733   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:12:18.308014   61251 cri.go:89] found id: "981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57"
	I0915 08:12:18.308036   61251 cri.go:89] found id: ""
	I0915 08:12:18.308044   61251 logs.go:276] 1 containers: [981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57]
	I0915 08:12:18.308100   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.312341   61251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:12:18.312400   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:12:18.354376   61251 cri.go:89] found id: "be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01"
	I0915 08:12:18.354410   61251 cri.go:89] found id: ""
	I0915 08:12:18.354420   61251 logs.go:276] 1 containers: [be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01]
	I0915 08:12:18.354470   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.358419   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:12:18.358491   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:12:18.398168   61251 cri.go:89] found id: "d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857"
	I0915 08:12:18.398197   61251 cri.go:89] found id: ""
	I0915 08:12:18.398206   61251 logs.go:276] 1 containers: [d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857]
	I0915 08:12:18.398267   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.402301   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:12:18.402362   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:12:18.453372   61251 cri.go:89] found id: "93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93"
	I0915 08:12:18.453404   61251 cri.go:89] found id: ""
	I0915 08:12:18.453414   61251 logs.go:276] 1 containers: [93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93]
	I0915 08:12:18.453472   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.457325   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:12:18.457374   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:12:18.499644   61251 cri.go:89] found id: "3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b"
	I0915 08:12:18.499663   61251 cri.go:89] found id: ""
	I0915 08:12:18.499671   61251 logs.go:276] 1 containers: [3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b]
	I0915 08:12:18.499725   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.504587   61251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:12:18.504648   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:12:18.543442   61251 cri.go:89] found id: ""
	I0915 08:12:18.543465   61251 logs.go:276] 0 containers: []
	W0915 08:12:18.543473   61251 logs.go:278] No container was found matching "kindnet"
	I0915 08:12:18.543479   61251 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:12:18.543525   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:12:18.579091   61251 cri.go:89] found id: "85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf"
	I0915 08:12:18.579114   61251 cri.go:89] found id: "8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8"
	I0915 08:12:18.579118   61251 cri.go:89] found id: ""
	I0915 08:12:18.579125   61251 logs.go:276] 2 containers: [85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf 8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8]
	I0915 08:12:18.579177   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.588331   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:18.592610   61251 logs.go:123] Gathering logs for kube-scheduler [d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857] ...
	I0915 08:12:18.592633   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857"
	I0915 08:12:18.634922   61251 logs.go:123] Gathering logs for kube-proxy [93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93] ...
	I0915 08:12:18.634950   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93"
	I0915 08:12:18.675331   61251 logs.go:123] Gathering logs for kube-controller-manager [3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b] ...
	I0915 08:12:18.675361   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b"
	I0915 08:12:18.731082   61251 logs.go:123] Gathering logs for storage-provisioner [85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf] ...
	I0915 08:12:18.731116   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf"
	I0915 08:12:18.767238   61251 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:12:18.767266   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:12:19.242515   61251 logs.go:123] Gathering logs for kubelet ...
	I0915 08:12:19.242560   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:12:19.309399   61251 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:12:19.309445   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 08:12:19.439866   61251 logs.go:123] Gathering logs for coredns [be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01] ...
	I0915 08:12:19.439898   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01"
	I0915 08:12:19.476616   61251 logs.go:123] Gathering logs for container status ...
	I0915 08:12:19.476646   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:12:19.520895   61251 logs.go:123] Gathering logs for storage-provisioner [8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8] ...
	I0915 08:12:19.520925   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8"
	I0915 08:12:19.556529   61251 logs.go:123] Gathering logs for dmesg ...
	I0915 08:12:19.556561   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:12:19.572080   61251 logs.go:123] Gathering logs for kube-apiserver [f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65] ...
	I0915 08:12:19.572109   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65"
	I0915 08:12:19.633381   61251 logs.go:123] Gathering logs for etcd [981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57] ...
	I0915 08:12:19.633417   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57"
	I0915 08:12:22.177666   61251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 08:12:22.194381   61251 api_server.go:72] duration metric: took 4m15.356352087s to wait for apiserver process to appear ...
	I0915 08:12:22.194406   61251 api_server.go:88] waiting for apiserver healthz status ...
	I0915 08:12:22.194446   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:12:22.194497   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:12:22.231320   61251 cri.go:89] found id: "f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65"
	I0915 08:12:22.231349   61251 cri.go:89] found id: ""
	I0915 08:12:22.231360   61251 logs.go:276] 1 containers: [f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65]
	I0915 08:12:22.231420   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.235554   61251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:12:22.235626   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:12:22.272467   61251 cri.go:89] found id: "981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57"
	I0915 08:12:22.272488   61251 cri.go:89] found id: ""
	I0915 08:12:22.272496   61251 logs.go:276] 1 containers: [981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57]
	I0915 08:12:22.272541   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.276606   61251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:12:22.276668   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:12:22.311190   61251 cri.go:89] found id: "be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01"
	I0915 08:12:22.311215   61251 cri.go:89] found id: ""
	I0915 08:12:22.311223   61251 logs.go:276] 1 containers: [be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01]
	I0915 08:12:22.311267   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.315220   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:12:22.315287   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:12:22.351494   61251 cri.go:89] found id: "d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857"
	I0915 08:12:22.351522   61251 cri.go:89] found id: ""
	I0915 08:12:22.351532   61251 logs.go:276] 1 containers: [d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857]
	I0915 08:12:22.351589   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.355839   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:12:22.355894   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:12:22.391284   61251 cri.go:89] found id: "93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93"
	I0915 08:12:22.391307   61251 cri.go:89] found id: ""
	I0915 08:12:22.391316   61251 logs.go:276] 1 containers: [93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93]
	I0915 08:12:22.391372   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.395258   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:12:22.395308   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:12:22.431616   61251 cri.go:89] found id: "3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b"
	I0915 08:12:22.431641   61251 cri.go:89] found id: ""
	I0915 08:12:22.431650   61251 logs.go:276] 1 containers: [3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b]
	I0915 08:12:22.431704   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.435661   61251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:12:22.435723   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:12:22.479921   61251 cri.go:89] found id: ""
	I0915 08:12:22.479945   61251 logs.go:276] 0 containers: []
	W0915 08:12:22.479952   61251 logs.go:278] No container was found matching "kindnet"
	I0915 08:12:22.479958   61251 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:12:22.480005   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:12:22.521956   61251 cri.go:89] found id: "85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf"
	I0915 08:12:22.521983   61251 cri.go:89] found id: "8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8"
	I0915 08:12:22.521989   61251 cri.go:89] found id: ""
	I0915 08:12:22.521997   61251 logs.go:276] 2 containers: [85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf 8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8]
	I0915 08:12:22.522050   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.526330   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:22.530094   61251 logs.go:123] Gathering logs for storage-provisioner [85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf] ...
	I0915 08:12:22.530111   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf"
	I0915 08:12:22.565801   61251 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:12:22.565839   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:12:20.502981   61935 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 08:12:20.503243   61935 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 08:12:22.983179   61251 logs.go:123] Gathering logs for etcd [981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57] ...
	I0915 08:12:22.983219   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57"
	I0915 08:12:23.031115   61251 logs.go:123] Gathering logs for dmesg ...
	I0915 08:12:23.031146   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:12:23.045247   61251 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:12:23.045278   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 08:12:23.155242   61251 logs.go:123] Gathering logs for kube-apiserver [f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65] ...
	I0915 08:12:23.155273   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65"
	I0915 08:12:23.209162   61251 logs.go:123] Gathering logs for coredns [be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01] ...
	I0915 08:12:23.209193   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01"
	I0915 08:12:23.245680   61251 logs.go:123] Gathering logs for kube-scheduler [d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857] ...
	I0915 08:12:23.245707   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857"
	I0915 08:12:23.287224   61251 logs.go:123] Gathering logs for kube-proxy [93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93] ...
	I0915 08:12:23.287259   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93"
	I0915 08:12:23.332173   61251 logs.go:123] Gathering logs for kube-controller-manager [3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b] ...
	I0915 08:12:23.332200   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b"
	I0915 08:12:23.394892   61251 logs.go:123] Gathering logs for kubelet ...
	I0915 08:12:23.394924   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:12:23.461206   61251 logs.go:123] Gathering logs for container status ...
	I0915 08:12:23.461238   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:12:23.508778   61251 logs.go:123] Gathering logs for storage-provisioner [8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8] ...
	I0915 08:12:23.508806   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8"
	I0915 08:12:26.046457   61251 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0915 08:12:26.051170   61251 api_server.go:279] https://192.168.61.247:8443/healthz returned 200:
	ok
	I0915 08:12:26.052350   61251 api_server.go:141] control plane version: v1.31.1
	I0915 08:12:26.052372   61251 api_server.go:131] duration metric: took 3.857959369s to wait for apiserver health ...
	I0915 08:12:26.052381   61251 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 08:12:26.052411   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:12:26.052474   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:12:26.089759   61251 cri.go:89] found id: "f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65"
	I0915 08:12:26.089785   61251 cri.go:89] found id: ""
	I0915 08:12:26.089794   61251 logs.go:276] 1 containers: [f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65]
	I0915 08:12:26.089863   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.094076   61251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:12:26.094130   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:12:26.129421   61251 cri.go:89] found id: "981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57"
	I0915 08:12:26.129448   61251 cri.go:89] found id: ""
	I0915 08:12:26.129456   61251 logs.go:276] 1 containers: [981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57]
	I0915 08:12:26.129512   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.133637   61251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:12:26.133689   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:12:26.171573   61251 cri.go:89] found id: "be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01"
	I0915 08:12:26.171604   61251 cri.go:89] found id: ""
	I0915 08:12:26.171615   61251 logs.go:276] 1 containers: [be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01]
	I0915 08:12:26.171674   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.176092   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:12:26.176167   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:12:26.213369   61251 cri.go:89] found id: "d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857"
	I0915 08:12:26.213391   61251 cri.go:89] found id: ""
	I0915 08:12:26.213400   61251 logs.go:276] 1 containers: [d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857]
	I0915 08:12:26.213472   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.218389   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:12:26.218446   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:12:26.252103   61251 cri.go:89] found id: "93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93"
	I0915 08:12:26.252124   61251 cri.go:89] found id: ""
	I0915 08:12:26.252134   61251 logs.go:276] 1 containers: [93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93]
	I0915 08:12:26.252189   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.256206   61251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:12:26.256271   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:12:26.295465   61251 cri.go:89] found id: "3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b"
	I0915 08:12:26.295491   61251 cri.go:89] found id: ""
	I0915 08:12:26.295499   61251 logs.go:276] 1 containers: [3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b]
	I0915 08:12:26.295547   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.299480   61251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:12:26.299546   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:12:26.339452   61251 cri.go:89] found id: ""
	I0915 08:12:26.339477   61251 logs.go:276] 0 containers: []
	W0915 08:12:26.339488   61251 logs.go:278] No container was found matching "kindnet"
	I0915 08:12:26.339495   61251 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:12:26.339559   61251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:12:26.372789   61251 cri.go:89] found id: "85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf"
	I0915 08:12:26.372813   61251 cri.go:89] found id: "8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8"
	I0915 08:12:26.372817   61251 cri.go:89] found id: ""
	I0915 08:12:26.372824   61251 logs.go:276] 2 containers: [85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf 8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8]
	I0915 08:12:26.372881   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.377148   61251 ssh_runner.go:195] Run: which crictl
	I0915 08:12:26.381227   61251 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:12:26.381251   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 08:12:26.484163   61251 logs.go:123] Gathering logs for kube-apiserver [f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65] ...
	I0915 08:12:26.484196   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f730a7be0c4528b2982c5b91760dcee00022f5ab5a248fcc73ad9f49830d3c65"
	I0915 08:12:26.536934   61251 logs.go:123] Gathering logs for etcd [981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57] ...
	I0915 08:12:26.536963   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 981f9165f28bd60ab1836edfd8041dba665c7bc05c3a364be17ba8964d282a57"
	I0915 08:12:26.581512   61251 logs.go:123] Gathering logs for coredns [be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01] ...
	I0915 08:12:26.581543   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be8f5cfe7d2d946cf5c5622e9c94e7bcb9b2ec4d0e95180a114f9e82755bce01"
	I0915 08:12:26.622869   61251 logs.go:123] Gathering logs for kube-scheduler [d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857] ...
	I0915 08:12:26.622899   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1e61b69e2d01d01ce165b5436224668f268e4dee3d7bdef43c063ab557d8857"
	I0915 08:12:26.661284   61251 logs.go:123] Gathering logs for kube-proxy [93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93] ...
	I0915 08:12:26.661310   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93013faea664e4e8db8e1bb2537dea8af928c5d01d7a9d6959253e696fa98c93"
	I0915 08:12:26.699834   61251 logs.go:123] Gathering logs for storage-provisioner [8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8] ...
	I0915 08:12:26.699859   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fce3a8e2ecaf829be923c64f5d59cb45f9ccc6d8d2a3bb93f2e1b9e46a4e3b8"
	I0915 08:12:26.737892   61251 logs.go:123] Gathering logs for container status ...
	I0915 08:12:26.737918   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:12:26.779515   61251 logs.go:123] Gathering logs for kubelet ...
	I0915 08:12:26.779552   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:12:26.847730   61251 logs.go:123] Gathering logs for dmesg ...
	I0915 08:12:26.847765   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:12:26.861895   61251 logs.go:123] Gathering logs for kube-controller-manager [3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b] ...
	I0915 08:12:26.861921   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d8eacf21d23b53d85c171af02e519051aba68259292a095054d155f852e084b"
	I0915 08:12:26.920820   61251 logs.go:123] Gathering logs for storage-provisioner [85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf] ...
	I0915 08:12:26.920853   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85b843cf3e13dee8d0d6bf561abefa1fd9560b67eca302edd269e8daae2baecf"
	I0915 08:12:26.958335   61251 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:12:26.958365   61251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:12:29.830410   61251 system_pods.go:59] 8 kube-system pods found
	I0915 08:12:29.830439   61251 system_pods.go:61] "coredns-7c65d6cfc9-xbvrd" [271712fa-0fe3-44f3-898f-12e5a30d3a79] Running
	I0915 08:12:29.830444   61251 system_pods.go:61] "etcd-no-preload-778087" [4efdc0ef-ba7b-4090-82b7-8d2cb35aab39] Running
	I0915 08:12:29.830448   61251 system_pods.go:61] "kube-apiserver-no-preload-778087" [d06944b2-19bf-4d6a-b862-69e28d8d3991] Running
	I0915 08:12:29.830452   61251 system_pods.go:61] "kube-controller-manager-no-preload-778087" [59bfb273-2f4f-4cf1-ae8e-6398c92b6d81] Running
	I0915 08:12:29.830455   61251 system_pods.go:61] "kube-proxy-2qg9r" [c34dcf5b-b172-4c9a-b7b5-6fb43564df4a] Running
	I0915 08:12:29.830458   61251 system_pods.go:61] "kube-scheduler-no-preload-778087" [1978dd3a-2bae-45dd-8e81-acb164693b70] Running
	I0915 08:12:29.830465   61251 system_pods.go:61] "metrics-server-6867b74b74-d5nzc" [4ce62161-4931-423a-9d68-c17512ec80ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 08:12:29.830469   61251 system_pods.go:61] "storage-provisioner" [22a8e26f-7033-49e1-8e14-8d4bd03822d3] Running
	I0915 08:12:29.830477   61251 system_pods.go:74] duration metric: took 3.778089294s to wait for pod list to return data ...
	I0915 08:12:29.830484   61251 default_sa.go:34] waiting for default service account to be created ...
	I0915 08:12:29.833406   61251 default_sa.go:45] found service account: "default"
	I0915 08:12:29.833432   61251 default_sa.go:55] duration metric: took 2.94232ms for default service account to be created ...
	I0915 08:12:29.833438   61251 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 08:12:29.838522   61251 system_pods.go:86] 8 kube-system pods found
	I0915 08:12:29.838550   61251 system_pods.go:89] "coredns-7c65d6cfc9-xbvrd" [271712fa-0fe3-44f3-898f-12e5a30d3a79] Running
	I0915 08:12:29.838556   61251 system_pods.go:89] "etcd-no-preload-778087" [4efdc0ef-ba7b-4090-82b7-8d2cb35aab39] Running
	I0915 08:12:29.838560   61251 system_pods.go:89] "kube-apiserver-no-preload-778087" [d06944b2-19bf-4d6a-b862-69e28d8d3991] Running
	I0915 08:12:29.838565   61251 system_pods.go:89] "kube-controller-manager-no-preload-778087" [59bfb273-2f4f-4cf1-ae8e-6398c92b6d81] Running
	I0915 08:12:29.838569   61251 system_pods.go:89] "kube-proxy-2qg9r" [c34dcf5b-b172-4c9a-b7b5-6fb43564df4a] Running
	I0915 08:12:29.838572   61251 system_pods.go:89] "kube-scheduler-no-preload-778087" [1978dd3a-2bae-45dd-8e81-acb164693b70] Running
	I0915 08:12:29.838579   61251 system_pods.go:89] "metrics-server-6867b74b74-d5nzc" [4ce62161-4931-423a-9d68-c17512ec80ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 08:12:29.838583   61251 system_pods.go:89] "storage-provisioner" [22a8e26f-7033-49e1-8e14-8d4bd03822d3] Running
	I0915 08:12:29.838589   61251 system_pods.go:126] duration metric: took 5.146384ms to wait for k8s-apps to be running ...
	I0915 08:12:29.838599   61251 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 08:12:29.838639   61251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 08:12:29.854679   61251 system_svc.go:56] duration metric: took 16.070232ms WaitForService to wait for kubelet
	I0915 08:12:29.854712   61251 kubeadm.go:582] duration metric: took 4m23.016688808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 08:12:29.854736   61251 node_conditions.go:102] verifying NodePressure condition ...
	I0915 08:12:29.858986   61251 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 08:12:29.859012   61251 node_conditions.go:123] node cpu capacity is 2
	I0915 08:12:29.859075   61251 node_conditions.go:105] duration metric: took 4.332511ms to run NodePressure ...
	I0915 08:12:29.859089   61251 start.go:241] waiting for startup goroutines ...
	I0915 08:12:29.859102   61251 start.go:246] waiting for cluster config update ...
	I0915 08:12:29.859118   61251 start.go:255] writing updated cluster config ...
	I0915 08:12:29.859398   61251 ssh_runner.go:195] Run: rm -f paused
	I0915 08:12:29.908665   61251 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 08:12:29.911578   61251 out.go:177] * Done! kubectl is now configured to use "no-preload-778087" cluster and "default" namespace by default
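The successful start above reduces to two gates: wait for a kube-apiserver process (the pgrep at 08:12:22), then poll its healthz endpoint until it answers 200 with body "ok" (api_server.go:253/279), after which the kube-system pod, default service-account and NodePressure checks run. A rough shell equivalent of that health poll, using the endpoint printed in this log; curl -k skips certificate verification purely to keep the sketch short, and the 1s/60s poll interval and timeout are illustrative, not the values minikube uses:

  # Poll the apiserver healthz endpoint until it reports "ok", giving up after ~60s.
  for i in $(seq 1 60); do
    if [ "$(curl -sk https://192.168.61.247:8443/healthz)" = "ok" ]; then
      echo "apiserver healthy"
      break
    fi
    sleep 1
  done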
	I0915 08:12:30.503816   61935 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 08:12:30.504044   61935 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 08:12:50.504815   61935 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 08:12:50.504999   61935 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
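The interleaved kubelet-check lines from process 61935 are kubeadm probing the kubelet's own health port and getting connection refused, i.e. nothing is listening on 10248 yet. The probe it reports can be reproduced verbatim on that node, and the unit status it implies is the first thing worth checking:

  # kubeadm's kubelet health probe, as quoted in the log above; "connection refused"
  # means the kubelet is not (yet) listening on its healthz port.
  curl -sSL http://localhost:10248/healthz
  systemctl status kubelet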
	I0915 08:13:12.512842   60028 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0915 08:13:12.512958   60028 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0915 08:13:12.513552   60028 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 08:13:12.513660   60028 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 08:13:12.513766   60028 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 08:13:12.513932   60028 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 08:13:12.514075   60028 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 08:13:12.514144   60028 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 08:13:12.516050   60028 out.go:235]   - Generating certificates and keys ...
	I0915 08:13:12.516133   60028 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 08:13:12.516229   60028 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 08:13:12.516325   60028 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 08:13:12.516418   60028 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 08:13:12.516487   60028 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 08:13:12.516534   60028 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 08:13:12.516609   60028 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 08:13:12.516704   60028 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 08:13:12.516782   60028 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 08:13:12.516853   60028 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 08:13:12.516887   60028 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 08:13:12.516956   60028 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 08:13:12.517031   60028 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 08:13:12.517114   60028 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 08:13:12.517184   60028 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 08:13:12.517278   60028 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 08:13:12.517355   60028 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 08:13:12.517432   60028 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 08:13:12.517491   60028 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 08:13:12.519208   60028 out.go:235]   - Booting up control plane ...
	I0915 08:13:12.519309   60028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 08:13:12.519398   60028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 08:13:12.519470   60028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 08:13:12.519564   60028 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 08:13:12.519666   60028 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 08:13:12.519703   60028 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 08:13:12.519827   60028 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 08:13:12.519946   60028 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 08:13:12.520023   60028 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003638358s
	I0915 08:13:12.520109   60028 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 08:13:12.520172   60028 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000328562s
	I0915 08:13:12.520181   60028 kubeadm.go:310] 
	I0915 08:13:12.520232   60028 kubeadm.go:310] Unfortunately, an error has occurred:
	I0915 08:13:12.520277   60028 kubeadm.go:310] 	context deadline exceeded
	I0915 08:13:12.520287   60028 kubeadm.go:310] 
	I0915 08:13:12.520321   60028 kubeadm.go:310] This error is likely caused by:
	I0915 08:13:12.520357   60028 kubeadm.go:310] 	- The kubelet is not running
	I0915 08:13:12.520478   60028 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0915 08:13:12.520492   60028 kubeadm.go:310] 
	I0915 08:13:12.520617   60028 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0915 08:13:12.520646   60028 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0915 08:13:12.520684   60028 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0915 08:13:12.520694   60028 kubeadm.go:310] 
	I0915 08:13:12.520818   60028 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0915 08:13:12.520896   60028 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0915 08:13:12.520974   60028 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0915 08:13:12.521054   60028 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0915 08:13:12.521124   60028 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0915 08:13:12.521236   60028 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0915 08:13:12.521253   60028 kubeadm.go:394] duration metric: took 12m10.604564541s to StartCluster
	I0915 08:13:12.521283   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 08:13:12.521332   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 08:13:12.565045   60028 cri.go:89] found id: ""
	I0915 08:13:12.565077   60028 logs.go:276] 0 containers: []
	W0915 08:13:12.565089   60028 logs.go:278] No container was found matching "kube-apiserver"
	I0915 08:13:12.565097   60028 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 08:13:12.565156   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 08:13:12.602292   60028 cri.go:89] found id: "ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877"
	I0915 08:13:12.602324   60028 cri.go:89] found id: ""
	I0915 08:13:12.602335   60028 logs.go:276] 1 containers: [ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877]
	I0915 08:13:12.602391   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:13:12.606895   60028 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 08:13:12.606967   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 08:13:12.644288   60028 cri.go:89] found id: ""
	I0915 08:13:12.644314   60028 logs.go:276] 0 containers: []
	W0915 08:13:12.644322   60028 logs.go:278] No container was found matching "coredns"
	I0915 08:13:12.644327   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 08:13:12.644376   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 08:13:12.678004   60028 cri.go:89] found id: "14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301"
	I0915 08:13:12.678030   60028 cri.go:89] found id: ""
	I0915 08:13:12.678039   60028 logs.go:276] 1 containers: [14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301]
	I0915 08:13:12.678093   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:13:12.682424   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 08:13:12.682481   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 08:13:12.721315   60028 cri.go:89] found id: ""
	I0915 08:13:12.721344   60028 logs.go:276] 0 containers: []
	W0915 08:13:12.721354   60028 logs.go:278] No container was found matching "kube-proxy"
	I0915 08:13:12.721361   60028 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 08:13:12.721428   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 08:13:12.759642   60028 cri.go:89] found id: "713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d"
	I0915 08:13:12.759670   60028 cri.go:89] found id: ""
	I0915 08:13:12.759679   60028 logs.go:276] 1 containers: [713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d]
	I0915 08:13:12.759738   60028 ssh_runner.go:195] Run: which crictl
	I0915 08:13:12.764007   60028 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 08:13:12.764074   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 08:13:12.803027   60028 cri.go:89] found id: ""
	I0915 08:13:12.803051   60028 logs.go:276] 0 containers: []
	W0915 08:13:12.803059   60028 logs.go:278] No container was found matching "kindnet"
	I0915 08:13:12.803065   60028 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0915 08:13:12.803120   60028 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0915 08:13:12.843987   60028 cri.go:89] found id: ""
	I0915 08:13:12.844011   60028 logs.go:276] 0 containers: []
	W0915 08:13:12.844018   60028 logs.go:278] No container was found matching "storage-provisioner"
	I0915 08:13:12.844033   60028 logs.go:123] Gathering logs for CRI-O ...
	I0915 08:13:12.844044   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 08:13:13.080292   60028 logs.go:123] Gathering logs for container status ...
	I0915 08:13:13.080326   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 08:13:13.120978   60028 logs.go:123] Gathering logs for kubelet ...
	I0915 08:13:13.121004   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 08:13:13.288678   60028 logs.go:123] Gathering logs for dmesg ...
	I0915 08:13:13.288711   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 08:13:13.303887   60028 logs.go:123] Gathering logs for describe nodes ...
	I0915 08:13:13.303915   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0915 08:13:13.389078   60028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0915 08:13:13.389100   60028 logs.go:123] Gathering logs for etcd [ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877] ...
	I0915 08:13:13.389111   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877"
	I0915 08:13:13.434769   60028 logs.go:123] Gathering logs for kube-scheduler [14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301] ...
	I0915 08:13:13.434793   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301"
	I0915 08:13:13.526437   60028 logs.go:123] Gathering logs for kube-controller-manager [713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d] ...
	I0915 08:13:13.526475   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d"
	W0915 08:13:13.563112   60028 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.003638358s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000328562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0915 08:09:10.272930   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0915 08:09:10.273684   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0915 08:13:13.563173   60028 out.go:270] * 
	W0915 08:13:13.563240   60028 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.003638358s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000328562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0915 08:09:10.272930   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0915 08:09:10.273684   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0915 08:13:13.563264   60028 out.go:270] * 
	W0915 08:13:13.564120   60028 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 08:13:13.567102   60028 out.go:201] 
	W0915 08:13:13.568376   60028 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.003638358s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000328562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0915 08:09:10.272930   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0915 08:09:10.273684   11141 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0915 08:13:13.568425   60028 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0915 08:13:13.568445   60028 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0915 08:13:13.569984   60028 out.go:201] 
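Putting the failure above together: kubeadm's own check saw a healthy kubelet, but the API server never came up within 4m0s, the post-mortem container listing found no kube-apiserver container at all, and the CRI-O view that follows shows kube-controller-manager in CONTAINER_EXITED after 16 restarts while etcd and kube-scheduler keep running. The triage sequence is the one the output itself recommends; whether the cgroup-driver hint is the actual root cause here is an open question, not something this log establishes:

  # On the affected node, in the order the output suggests:
  systemctl status kubelet
  journalctl -xeu kubelet
  crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # then, for whichever container is failing:
  # crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
  # and, per the suggestion at 08:13:13, retry with:
  # minikube start --extra-config=kubelet.cgroup-driver=systemd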
	
	
	==> CRI-O <==
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.266080511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387995266054686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afa95443-b29a-45dd-aa5d-c02fb6481122 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.266711818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5ad04e6-8f62-4b0f-80f5-8afc1b3cb75a name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.266766160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5ad04e6-8f62-4b0f-80f5-8afc1b3cb75a name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.266865804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d,PodSandboxId:d29b4ede889629a93da9b45ce57adbdb6156aa0b7451c358af2ffbc186303764,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726387904043802508,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8248da4a7151635a273d5085bd0429,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.c
ontainer.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877,PodSandboxId:5a9f1209e647e4005c8d601734c67a6d1f4577d86f83e20dc3d5050482ac7e68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726387752724651224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41b993538a067878336c5452a3db3fcd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301,PodSandboxId:d64890d38a0b7bd7719d2e0b6994c9950aede800692ee90cd5ef489bedf7c83a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726387752646116966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b7a754015b80ceece7c4061372df84,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5ad04e6-8f62-4b0f-80f5-8afc1b3cb75a name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.300780487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b3bc6af-ff33-4a37-a0af-1367d9ed58f1 name=/runtime.v1.RuntimeService/Version
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.300880100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b3bc6af-ff33-4a37-a0af-1367d9ed58f1 name=/runtime.v1.RuntimeService/Version
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.302004758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f18b4a2f-8088-46da-b67d-39a1bbe290e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.302690043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387995302657014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f18b4a2f-8088-46da-b67d-39a1bbe290e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.303533063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52b0feea-04cf-435d-b12e-db42cac5aebe name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.303601569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52b0feea-04cf-435d-b12e-db42cac5aebe name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.303687270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d,PodSandboxId:d29b4ede889629a93da9b45ce57adbdb6156aa0b7451c358af2ffbc186303764,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726387904043802508,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8248da4a7151635a273d5085bd0429,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.c
ontainer.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877,PodSandboxId:5a9f1209e647e4005c8d601734c67a6d1f4577d86f83e20dc3d5050482ac7e68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726387752724651224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41b993538a067878336c5452a3db3fcd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301,PodSandboxId:d64890d38a0b7bd7719d2e0b6994c9950aede800692ee90cd5ef489bedf7c83a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726387752646116966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b7a754015b80ceece7c4061372df84,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52b0feea-04cf-435d-b12e-db42cac5aebe name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.345349694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=474b5f97-0a35-4834-964e-fd95dc26a934 name=/runtime.v1.RuntimeService/Version
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.345426954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=474b5f97-0a35-4834-964e-fd95dc26a934 name=/runtime.v1.RuntimeService/Version
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.346739098Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2906d866-fea1-46ff-9279-1df400aa8d60 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.347103112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387995347080856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2906d866-fea1-46ff-9279-1df400aa8d60 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.347814418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3dafd17-aa82-42a5-b951-d679f0b46214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.347863934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3dafd17-aa82-42a5-b951-d679f0b46214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.347961863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d,PodSandboxId:d29b4ede889629a93da9b45ce57adbdb6156aa0b7451c358af2ffbc186303764,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726387904043802508,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8248da4a7151635a273d5085bd0429,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.c
ontainer.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877,PodSandboxId:5a9f1209e647e4005c8d601734c67a6d1f4577d86f83e20dc3d5050482ac7e68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726387752724651224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41b993538a067878336c5452a3db3fcd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301,PodSandboxId:d64890d38a0b7bd7719d2e0b6994c9950aede800692ee90cd5ef489bedf7c83a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726387752646116966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b7a754015b80ceece7c4061372df84,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3dafd17-aa82-42a5-b951-d679f0b46214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.384807530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9783175-9337-41fd-9265-f43479e01f3f name=/runtime.v1.RuntimeService/Version
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.384901561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9783175-9337-41fd-9265-f43479e01f3f name=/runtime.v1.RuntimeService/Version
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.386977186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3d7980f-8bb2-4173-a747-f404c276d1da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.387587209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387995387557145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3d7980f-8bb2-4173-a747-f404c276d1da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.388462062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=537a40f7-93a4-43f5-b25b-cd28b441478d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.388527879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=537a40f7-93a4-43f5-b25b-cd28b441478d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 08:13:15 kubernetes-upgrade-669362 crio[3093]: time="2024-09-15 08:13:15.388638429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d,PodSandboxId:d29b4ede889629a93da9b45ce57adbdb6156aa0b7451c358af2ffbc186303764,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726387904043802508,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8248da4a7151635a273d5085bd0429,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.c
ontainer.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877,PodSandboxId:5a9f1209e647e4005c8d601734c67a6d1f4577d86f83e20dc3d5050482ac7e68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726387752724651224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41b993538a067878336c5452a3db3fcd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301,PodSandboxId:d64890d38a0b7bd7719d2e0b6994c9950aede800692ee90cd5ef489bedf7c83a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726387752646116966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-669362,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b7a754015b80ceece7c4061372df84,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=537a40f7-93a4-43f5-b25b-cd28b441478d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	713128e58445e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   16                  d29b4ede88962       kube-controller-manager-kubernetes-upgrade-669362
	ed52de99cc734       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   4 minutes ago        Running             etcd                      4                   5a9f1209e647e       etcd-kubernetes-upgrade-669362
	14fd23d4076c0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   4 minutes ago        Running             kube-scheduler            4                   d64890d38a0b7       kube-scheduler-kubernetes-upgrade-669362
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +8.014099] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
	[  +0.056073] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062766] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.190486] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.137146] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.295825] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +4.384364] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +0.058466] kauditd_printk_skb: 130 callbacks suppressed
	[Sep15 07:59] systemd-fstab-generator[833]: Ignoring "noauto" option for root device
	[ +10.486229] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.080715] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.966292] kauditd_printk_skb: 111 callbacks suppressed
	[ +11.342537] systemd-fstab-generator[2751]: Ignoring "noauto" option for root device
	[  +0.226583] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.317705] systemd-fstab-generator[2860]: Ignoring "noauto" option for root device
	[  +0.223220] systemd-fstab-generator[2901]: Ignoring "noauto" option for root device
	[  +0.459840] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[Sep15 08:01] systemd-fstab-generator[3232]: Ignoring "noauto" option for root device
	[  +0.088830] kauditd_printk_skb: 209 callbacks suppressed
	[  +1.850350] systemd-fstab-generator[3360]: Ignoring "noauto" option for root device
	[ +12.448210] kauditd_printk_skb: 79 callbacks suppressed
	[Sep15 08:05] systemd-fstab-generator[10402]: Ignoring "noauto" option for root device
	[ +12.674755] kauditd_printk_skb: 82 callbacks suppressed
	[Sep15 08:09] systemd-fstab-generator[11167]: Ignoring "noauto" option for root device
	[ +12.770753] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> etcd [ed52de99cc7342428987a83b448dd0c6c49448eba5681162d005f5ded69a3877] <==
	{"level":"info","ts":"2024-09-15T08:09:13.076512Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-15T08:09:13.076711Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a2f3304aa289252b","initial-advertise-peer-urls":["https://192.168.83.150:2380"],"listen-peer-urls":["https://192.168.83.150:2380"],"advertise-client-urls":["https://192.168.83.150:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.150:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T08:09:13.076761Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T08:09:13.076843Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.83.150:2380"}
	{"level":"info","ts":"2024-09-15T08:09:13.076874Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.83.150:2380"}
	{"level":"info","ts":"2024-09-15T08:09:13.807600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2f3304aa289252b is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-15T08:09:13.807670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2f3304aa289252b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-15T08:09:13.807711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2f3304aa289252b received MsgPreVoteResp from a2f3304aa289252b at term 1"}
	{"level":"info","ts":"2024-09-15T08:09:13.807732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2f3304aa289252b became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T08:09:13.807740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2f3304aa289252b received MsgVoteResp from a2f3304aa289252b at term 2"}
	{"level":"info","ts":"2024-09-15T08:09:13.807752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2f3304aa289252b became leader at term 2"}
	{"level":"info","ts":"2024-09-15T08:09:13.807764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a2f3304aa289252b elected leader a2f3304aa289252b at term 2"}
	{"level":"info","ts":"2024-09-15T08:09:13.809323Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T08:09:13.809812Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a2f3304aa289252b","local-member-attributes":"{Name:kubernetes-upgrade-669362 ClientURLs:[https://192.168.83.150:2379]}","request-path":"/0/members/a2f3304aa289252b/attributes","cluster-id":"7fa9b793eb544566","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T08:09:13.809949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T08:09:13.810507Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T08:09:13.810740Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fa9b793eb544566","local-member-id":"a2f3304aa289252b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T08:09:13.810850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T08:09:13.810898Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T08:09:13.811781Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T08:09:13.812946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.150:2379"}
	{"level":"info","ts":"2024-09-15T08:09:13.814032Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T08:09:13.815234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T08:09:13.816937Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T08:09:13.816973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:13:15 up 14 min,  0 users,  load average: 0.04, 0.12, 0.10
	Linux kubernetes-upgrade-669362 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-controller-manager [713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d] <==
	I0915 08:11:44.700648       1 serving.go:386] Generated self-signed cert in-memory
	I0915 08:11:44.993407       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0915 08:11:44.993448       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 08:11:44.994839       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0915 08:11:44.995049       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0915 08:11:44.995055       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0915 08:11:44.995071       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0915 08:11:54.999535       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.150:8443/healthz\": dial tcp 192.168.83.150:8443: connect: connection refused"
	
	
	==> kube-scheduler [14fd23d4076c0ffb5f47f4760ae767ddcd1b172a99ab306047a7ab82a9d02301] <==
	E0915 08:12:40.409668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.83.150:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:12:43.496895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.83.150:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:12:43.496981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.83.150:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:12:45.347898       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.83.150:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:12:45.347951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.83.150:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:12:53.355593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.83.150:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:12:53.355714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.83.150:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:12:55.314459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.83.150:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:12:55.314507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.83.150:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:12:57.656696       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.83.150:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:12:57.656738       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.83.150:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:12:58.176769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.83.150:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:12:58.176812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.83.150:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:13:00.418344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.83.150:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:13:00.418417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.83.150:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:13:08.193546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.83.150:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:13:08.193702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.83.150:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:13:08.311389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.83.150:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:13:08.311502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.83.150:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:13:09.237829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.83.150:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:13:09.237947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.83.150:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:13:11.859238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.83.150:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:13:11.859298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.83.150:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	W0915 08:13:12.232721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.150:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	E0915 08:13:12.232819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.83.150:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 15 08:12:59 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:12:59.104197   11174 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.83.150:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-669362.17f55c83800e42df  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-669362,UID:kubernetes-upgrade-669362,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-669362 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-669362,},FirstTimestamp:2024-09-15 08:09:12.025342687 +0000 UTC m=+0.554424669,LastTimestamp:2024-09-15 08:09:12.025342687 +0000 UTC m=+0.554424669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,Rep
ortingController:kubelet,ReportingInstance:kubernetes-upgrade-669362,}"
	Sep 15 08:13:01 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:01.660976   11174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-669362?timeout=10s\": dial tcp 192.168.83.150:8443: connect: connection refused" interval="7s"
	Sep 15 08:13:01 kubernetes-upgrade-669362 kubelet[11174]: I0915 08:13:01.870590   11174 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-669362"
	Sep 15 08:13:01 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:01.871891   11174 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.150:8443: connect: connection refused" node="kubernetes-upgrade-669362"
	Sep 15 08:13:02 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:02.106869   11174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387982106508977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 08:13:02 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:02.106906   11174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387982106508977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 08:13:04 kubernetes-upgrade-669362 kubelet[11174]: W0915 08:13:04.086672   11174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-669362&limit=500&resourceVersion=0": dial tcp 192.168.83.150:8443: connect: connection refused
	Sep 15 08:13:04 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:04.087069   11174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-669362&limit=500&resourceVersion=0\": dial tcp 192.168.83.150:8443: connect: connection refused" logger="UnhandledError"
	Sep 15 08:13:07 kubernetes-upgrade-669362 kubelet[11174]: I0915 08:13:07.033278   11174 scope.go:117] "RemoveContainer" containerID="713128e58445eb1b2cbaecd3f8cd6199c3d67f3b2ce3c8d18658060782ea723d"
	Sep 15 08:13:07 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:07.033703   11174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-669362_kube-system(da8248da4a7151635a273d5085bd0429)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-669362" podUID="da8248da4a7151635a273d5085bd0429"
	Sep 15 08:13:08 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:08.662782   11174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-669362?timeout=10s\": dial tcp 192.168.83.150:8443: connect: connection refused" interval="7s"
	Sep 15 08:13:08 kubernetes-upgrade-669362 kubelet[11174]: I0915 08:13:08.873451   11174 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-669362"
	Sep 15 08:13:08 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:08.874382   11174 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.150:8443: connect: connection refused" node="kubernetes-upgrade-669362"
	Sep 15 08:13:09 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:09.106834   11174 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.83.150:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-669362.17f55c83800e42df  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-669362,UID:kubernetes-upgrade-669362,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-669362 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-669362,},FirstTimestamp:2024-09-15 08:09:12.025342687 +0000 UTC m=+0.554424669,LastTimestamp:2024-09-15 08:09:12.025342687 +0000 UTC m=+0.554424669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,Rep
ortingController:kubelet,ReportingInstance:kubernetes-upgrade-669362,}"
	Sep 15 08:13:10 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:10.040597   11174 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-669362_kube-system_b1cf429a4aa346cc1c50ed9eba0fb9f6_1\" is already in use by 55a4b9d309f27d1c2ec4554dbcea986c094ec8fa85b1e8749cc3d49383a4880c. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="fda9bb83a4c4211d09e30bc0ff48df48fc181ae65fde329eda1a5d13d047b100"
	Sep 15 08:13:10 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:10.040761   11174 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.31.1,Command:[kube-apiserver --advertise-address=192.168.83.150 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferr
ed-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},Claims:[]ResourceClai
m{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.83.150,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8443 },Host:192.168.83.150,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSecon
ds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.83.150,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-kubernetes-upgrade-669362_kube-system(b1cf429a4aa346cc1c50ed9eba0fb9f6): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-669362_kube-system_b1cf429a4aa346cc1c50ed9eba0fb9f6_1\" is already in use by 55a4b9d309f27d1c2ec4554dbcea986
c094ec8fa85b1e8749cc3d49383a4880c. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Sep 15 08:13:10 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:10.042003   11174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-669362_kube-system_b1cf429a4aa346cc1c50ed9eba0fb9f6_1\\\" is already in use by 55a4b9d309f27d1c2ec4554dbcea986c094ec8fa85b1e8749cc3d49383a4880c. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-669362" podUID="b1cf429a4aa346cc1c50ed9eba0fb9f6"
	Sep 15 08:13:12 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:12.047082   11174 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 15 08:13:12 kubernetes-upgrade-669362 kubelet[11174]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 15 08:13:12 kubernetes-upgrade-669362 kubelet[11174]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 15 08:13:12 kubernetes-upgrade-669362 kubelet[11174]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 15 08:13:12 kubernetes-upgrade-669362 kubelet[11174]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 15 08:13:12 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:12.108914   11174 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387992108580234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 08:13:12 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:12.108940   11174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726387992108580234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 08:13:15 kubernetes-upgrade-669362 kubelet[11174]: E0915 08:13:15.664467   11174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-669362?timeout=10s\": dial tcp 192.168.83.150:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-669362 -n kubernetes-upgrade-669362
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-669362 -n kubernetes-upgrade-669362: exit status 2 (224.810137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-669362" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-669362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-669362
--- FAIL: TestKubernetesUpgrade (1224.20s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (45.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-742219 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0915 07:54:19.272423   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-742219 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.419831209s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-742219] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-742219" primary control-plane node in "pause-742219" cluster
	* Updating the running kvm2 "pause-742219" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-742219" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:54:05.438208   52982 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:54:05.438368   52982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:54:05.438378   52982 out.go:358] Setting ErrFile to fd 2...
	I0915 07:54:05.438387   52982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:54:05.440926   52982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:54:05.441741   52982 out.go:352] Setting JSON to false
	I0915 07:54:05.442972   52982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5791,"bootTime":1726381054,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:54:05.443093   52982 start.go:139] virtualization: kvm guest
	I0915 07:54:05.445240   52982 out.go:177] * [pause-742219] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:54:05.447486   52982 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:54:05.447507   52982 notify.go:220] Checking for updates...
	I0915 07:54:05.450974   52982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:54:05.452479   52982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:54:05.453888   52982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:54:05.455244   52982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:54:05.456753   52982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:54:05.458611   52982 config.go:182] Loaded profile config "pause-742219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:54:05.459215   52982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:05.459276   52982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:05.483769   52982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0915 07:54:05.484403   52982 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:05.485128   52982 main.go:141] libmachine: Using API Version  1
	I0915 07:54:05.485156   52982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:05.485590   52982 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:05.485767   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:05.486075   52982 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:54:05.486388   52982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:05.486427   52982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:05.506691   52982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0915 07:54:05.507201   52982 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:05.507739   52982 main.go:141] libmachine: Using API Version  1
	I0915 07:54:05.507761   52982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:05.508168   52982 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:05.508337   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:05.551266   52982 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:54:05.552680   52982 start.go:297] selected driver: kvm2
	I0915 07:54:05.552699   52982 start.go:901] validating driver "kvm2" against &{Name:pause-742219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:pause-742219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:54:05.552885   52982 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:54:05.553363   52982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:54:05.553482   52982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:54:05.572668   52982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:54:05.573772   52982 cni.go:84] Creating CNI manager for ""
	I0915 07:54:05.573858   52982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:54:05.573937   52982 start.go:340] cluster config:
	{Name:pause-742219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-742219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:54:05.574119   52982 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:54:05.576126   52982 out.go:177] * Starting "pause-742219" primary control-plane node in "pause-742219" cluster
	I0915 07:54:05.577579   52982 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:54:05.577615   52982 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:54:05.577640   52982 cache.go:56] Caching tarball of preloaded images
	I0915 07:54:05.577726   52982 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:54:05.577742   52982 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:54:05.577916   52982 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/config.json ...
	I0915 07:54:05.578146   52982 start.go:360] acquireMachinesLock for pause-742219: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:54:05.578225   52982 start.go:364] duration metric: took 40.817µs to acquireMachinesLock for "pause-742219"
	I0915 07:54:05.578244   52982 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:54:05.578260   52982 fix.go:54] fixHost starting: 
	I0915 07:54:05.578714   52982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:05.578755   52982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:05.597163   52982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0915 07:54:05.597510   52982 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:05.598052   52982 main.go:141] libmachine: Using API Version  1
	I0915 07:54:05.598080   52982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:05.598417   52982 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:05.598706   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:05.598841   52982 main.go:141] libmachine: (pause-742219) Calling .GetState
	I0915 07:54:05.600633   52982 fix.go:112] recreateIfNeeded on pause-742219: state=Running err=<nil>
	W0915 07:54:05.600654   52982 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:54:05.602576   52982 out.go:177] * Updating the running kvm2 "pause-742219" VM ...
	I0915 07:54:05.604060   52982 machine.go:93] provisionDockerMachine start ...
	I0915 07:54:05.604079   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:05.604278   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:05.607721   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:05.608176   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:05.608215   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:05.608460   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:05.608814   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:05.608964   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:05.609079   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:05.609247   52982 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:05.609470   52982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.43 22 <nil> <nil>}
	I0915 07:54:05.609480   52982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:54:05.739747   52982 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-742219
	
	I0915 07:54:05.739778   52982 main.go:141] libmachine: (pause-742219) Calling .GetMachineName
	I0915 07:54:05.740017   52982 buildroot.go:166] provisioning hostname "pause-742219"
	I0915 07:54:05.740045   52982 main.go:141] libmachine: (pause-742219) Calling .GetMachineName
	I0915 07:54:05.740197   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:05.743680   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:05.744287   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:05.744326   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:05.744557   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:05.744731   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:05.744881   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:05.745043   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:05.745225   52982 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:05.745434   52982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.43 22 <nil> <nil>}
	I0915 07:54:05.745462   52982 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-742219 && echo "pause-742219" | sudo tee /etc/hostname
	I0915 07:54:05.895219   52982 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-742219
	
	I0915 07:54:05.895301   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:05.898838   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:05.899268   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:05.899301   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:05.899616   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:05.899807   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:05.899982   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:05.900113   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:05.900301   52982 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:05.900529   52982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.43 22 <nil> <nil>}
	I0915 07:54:05.900549   52982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-742219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-742219/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-742219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:54:06.024371   52982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:54:06.024404   52982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:54:06.024443   52982 buildroot.go:174] setting up certificates
	I0915 07:54:06.024463   52982 provision.go:84] configureAuth start
	I0915 07:54:06.024473   52982 main.go:141] libmachine: (pause-742219) Calling .GetMachineName
	I0915 07:54:06.024774   52982 main.go:141] libmachine: (pause-742219) Calling .GetIP
	I0915 07:54:06.028395   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.028887   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:06.028923   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.029197   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:06.032339   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.032731   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:06.032755   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.032895   52982 provision.go:143] copyHostCerts
	I0915 07:54:06.032954   52982 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:54:06.032965   52982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:54:06.033053   52982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:54:06.033194   52982 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:54:06.033212   52982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:54:06.033246   52982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:54:06.033339   52982 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:54:06.033350   52982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:54:06.033377   52982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:54:06.033454   52982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.pause-742219 san=[127.0.0.1 192.168.72.43 localhost minikube pause-742219]
	I0915 07:54:06.273946   52982 provision.go:177] copyRemoteCerts
	I0915 07:54:06.274001   52982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:54:06.274023   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:06.277123   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.277621   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:06.277661   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.277865   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:06.278079   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:06.278230   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:06.278376   52982 sshutil.go:53] new ssh client: &{IP:192.168.72.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/pause-742219/id_rsa Username:docker}
	I0915 07:54:06.375561   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:54:06.410069   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 07:54:06.440890   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 07:54:06.480964   52982 provision.go:87] duration metric: took 456.487336ms to configureAuth
	I0915 07:54:06.480997   52982 buildroot.go:189] setting minikube options for container-runtime
	I0915 07:54:06.481291   52982 config.go:182] Loaded profile config "pause-742219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:54:06.481389   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:06.484974   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.485342   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:06.485378   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:06.485525   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:06.485709   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:06.485873   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:06.486006   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:06.486267   52982 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:06.486532   52982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.43 22 <nil> <nil>}
	I0915 07:54:06.486573   52982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:54:12.515102   52982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:54:12.515130   52982 machine.go:96] duration metric: took 6.911055137s to provisionDockerMachine
	I0915 07:54:12.515145   52982 start.go:293] postStartSetup for "pause-742219" (driver="kvm2")
	I0915 07:54:12.515159   52982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:54:12.515180   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:12.515516   52982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:54:12.515549   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:12.518313   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.518685   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:12.518714   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.518897   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:12.519095   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:12.519248   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:12.519393   52982 sshutil.go:53] new ssh client: &{IP:192.168.72.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/pause-742219/id_rsa Username:docker}
	I0915 07:54:12.605154   52982 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:54:12.609486   52982 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 07:54:12.609512   52982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/addons for local assets ...
	I0915 07:54:12.609593   52982 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-6166/.minikube/files for local assets ...
	I0915 07:54:12.609685   52982 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem -> 131902.pem in /etc/ssl/certs
	I0915 07:54:12.609794   52982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:54:12.619591   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:54:12.643744   52982 start.go:296] duration metric: took 128.584337ms for postStartSetup
	I0915 07:54:12.643788   52982 fix.go:56] duration metric: took 7.065528039s for fixHost
	I0915 07:54:12.643811   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:12.646518   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.646838   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:12.646869   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.647084   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:12.647252   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:12.647391   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:12.647523   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:12.647708   52982 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:12.647911   52982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.43 22 <nil> <nil>}
	I0915 07:54:12.647923   52982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 07:54:12.766911   52982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726386852.756494720
	
	I0915 07:54:12.766932   52982 fix.go:216] guest clock: 1726386852.756494720
	I0915 07:54:12.766942   52982 fix.go:229] Guest: 2024-09-15 07:54:12.75649472 +0000 UTC Remote: 2024-09-15 07:54:12.643792311 +0000 UTC m=+7.250740663 (delta=112.702409ms)
	I0915 07:54:12.766966   52982 fix.go:200] guest clock delta is within tolerance: 112.702409ms
	I0915 07:54:12.766972   52982 start.go:83] releasing machines lock for "pause-742219", held for 7.188735328s
	I0915 07:54:12.766991   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:12.767233   52982 main.go:141] libmachine: (pause-742219) Calling .GetIP
	I0915 07:54:12.769902   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.770256   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:12.770280   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.770462   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:12.770867   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:12.771036   52982 main.go:141] libmachine: (pause-742219) Calling .DriverName
	I0915 07:54:12.771160   52982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:54:12.771203   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:12.771266   52982 ssh_runner.go:195] Run: cat /version.json
	I0915 07:54:12.771299   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHHostname
	I0915 07:54:12.773765   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.773945   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.774147   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:12.774172   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.774299   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:12.774420   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:12.774447   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:12.774454   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:12.774604   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHPort
	I0915 07:54:12.774606   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:12.774745   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHKeyPath
	I0915 07:54:12.774764   52982 sshutil.go:53] new ssh client: &{IP:192.168.72.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/pause-742219/id_rsa Username:docker}
	I0915 07:54:12.774882   52982 main.go:141] libmachine: (pause-742219) Calling .GetSSHUsername
	I0915 07:54:12.775036   52982 sshutil.go:53] new ssh client: &{IP:192.168.72.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/pause-742219/id_rsa Username:docker}
	I0915 07:54:12.855040   52982 ssh_runner.go:195] Run: systemctl --version
	I0915 07:54:12.884470   52982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:54:13.048115   52982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 07:54:13.054118   52982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 07:54:13.054212   52982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:54:13.064559   52982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:54:13.064582   52982 start.go:495] detecting cgroup driver to use...
	I0915 07:54:13.064643   52982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:54:13.081226   52982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:54:13.095385   52982 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:54:13.095463   52982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:54:13.110114   52982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:54:13.124131   52982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:54:13.264019   52982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:54:13.399438   52982 docker.go:233] disabling docker service ...
	I0915 07:54:13.399526   52982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:54:13.421023   52982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:54:13.435310   52982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:54:13.575859   52982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:54:13.713758   52982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:54:13.750740   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:54:13.788718   52982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:54:13.788801   52982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:54:13.904655   52982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:54:13.904737   52982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:54:13.979605   52982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:54:14.012272   52982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:54:14.079841   52982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:54:14.170834   52982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:54:14.211990   52982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:54:14.295212   52982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:54:14.365829   52982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:54:14.420272   52982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:54:14.456117   52982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:54:14.726494   52982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:54:15.305014   52982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:54:15.305090   52982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:54:15.310477   52982 start.go:563] Will wait 60s for crictl version
	I0915 07:54:15.310540   52982 ssh_runner.go:195] Run: which crictl
	I0915 07:54:15.314529   52982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:54:15.358338   52982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0915 07:54:15.358422   52982 ssh_runner.go:195] Run: crio --version
	I0915 07:54:15.391968   52982 ssh_runner.go:195] Run: crio --version
	I0915 07:54:15.425621   52982 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0915 07:54:15.427222   52982 main.go:141] libmachine: (pause-742219) Calling .GetIP
	I0915 07:54:15.430021   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:15.430341   52982 main.go:141] libmachine: (pause-742219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:a6:56", ip: ""} in network mk-pause-742219: {Iface:virbr4 ExpiryTime:2024-09-15 08:52:49 +0000 UTC Type:0 Mac:52:54:00:0f:a6:56 Iaid: IPaddr:192.168.72.43 Prefix:24 Hostname:pause-742219 Clientid:01:52:54:00:0f:a6:56}
	I0915 07:54:15.430392   52982 main.go:141] libmachine: (pause-742219) DBG | domain pause-742219 has defined IP address 192.168.72.43 and MAC address 52:54:00:0f:a6:56 in network mk-pause-742219
	I0915 07:54:15.430611   52982 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0915 07:54:15.435648   52982 kubeadm.go:883] updating cluster {Name:pause-742219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-742219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:54:15.435796   52982 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:54:15.435868   52982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:54:15.479577   52982 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:54:15.479601   52982 crio.go:433] Images already preloaded, skipping extraction
	I0915 07:54:15.479653   52982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:54:15.515612   52982 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:54:15.515635   52982 cache_images.go:84] Images are preloaded, skipping loading
	I0915 07:54:15.515648   52982 kubeadm.go:934] updating node { 192.168.72.43 8443 v1.31.1 crio true true} ...
	I0915 07:54:15.515738   52982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-742219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-742219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:54:15.515796   52982 ssh_runner.go:195] Run: crio config
	I0915 07:54:15.566517   52982 cni.go:84] Creating CNI manager for ""
	I0915 07:54:15.566547   52982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:54:15.566559   52982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:54:15.566584   52982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.43 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-742219 NodeName:pause-742219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 07:54:15.566770   52982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-742219"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 07:54:15.566839   52982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:54:15.579475   52982 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:54:15.579811   52982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 07:54:15.592496   52982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0915 07:54:15.611351   52982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:54:15.630409   52982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0915 07:54:15.649227   52982 ssh_runner.go:195] Run: grep 192.168.72.43	control-plane.minikube.internal$ /etc/hosts
	I0915 07:54:15.653370   52982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:54:15.801233   52982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:54:15.820407   52982 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219 for IP: 192.168.72.43
	I0915 07:54:15.820432   52982 certs.go:194] generating shared ca certs ...
	I0915 07:54:15.820451   52982 certs.go:226] acquiring lock for ca certs: {Name:mkd61417962f05b87745d77eb92646f7fe8b4029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:54:15.820634   52982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key
	I0915 07:54:15.820702   52982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key
	I0915 07:54:15.820714   52982 certs.go:256] generating profile certs ...
	I0915 07:54:15.820824   52982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/client.key
	I0915 07:54:15.820889   52982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/apiserver.key.38a6f6db
	I0915 07:54:15.820938   52982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/proxy-client.key
	I0915 07:54:15.821090   52982 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem (1338 bytes)
	W0915 07:54:15.821238   52982 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190_empty.pem, impossibly tiny 0 bytes
	I0915 07:54:15.821265   52982 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:54:15.821309   52982 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:54:15.821344   52982 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:54:15.821377   52982 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem (1679 bytes)
	I0915 07:54:15.821421   52982 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem (1708 bytes)
	I0915 07:54:15.822103   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:54:15.893624   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:54:16.103098   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:54:16.316319   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 07:54:16.380592   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 07:54:16.450599   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 07:54:16.514168   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:54:16.565252   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/pause-742219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:54:16.603981   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/ssl/certs/131902.pem --> /usr/share/ca-certificates/131902.pem (1708 bytes)
	I0915 07:54:16.638538   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:54:16.668707   52982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/13190.pem --> /usr/share/ca-certificates/13190.pem (1338 bytes)
	I0915 07:54:16.707008   52982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:54:16.749595   52982 ssh_runner.go:195] Run: openssl version
	I0915 07:54:16.756654   52982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131902.pem && ln -fs /usr/share/ca-certificates/131902.pem /etc/ssl/certs/131902.pem"
	I0915 07:54:16.771966   52982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131902.pem
	I0915 07:54:16.777326   52982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:48 /usr/share/ca-certificates/131902.pem
	I0915 07:54:16.777399   52982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131902.pem
	I0915 07:54:16.783798   52982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131902.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:54:16.797500   52982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:54:16.811972   52982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:54:16.816595   52982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:31 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:54:16.816697   52982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:54:16.823168   52982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:54:16.834698   52982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190.pem && ln -fs /usr/share/ca-certificates/13190.pem /etc/ssl/certs/13190.pem"
	I0915 07:54:16.848203   52982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190.pem
	I0915 07:54:16.853182   52982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:48 /usr/share/ca-certificates/13190.pem
	I0915 07:54:16.853247   52982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190.pem
	I0915 07:54:16.859591   52982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13190.pem /etc/ssl/certs/51391683.0"
	I0915 07:54:16.872690   52982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:54:16.877945   52982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 07:54:16.884644   52982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 07:54:16.891140   52982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 07:54:16.897084   52982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 07:54:16.903633   52982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 07:54:16.911723   52982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0915 07:54:16.917750   52982 kubeadm.go:392] StartCluster: {Name:pause-742219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-742219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:54:16.917931   52982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:54:16.918003   52982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:54:16.961523   52982 cri.go:89] found id: "dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85"
	I0915 07:54:16.961553   52982 cri.go:89] found id: "97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186"
	I0915 07:54:16.961560   52982 cri.go:89] found id: "8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280"
	I0915 07:54:16.961565   52982 cri.go:89] found id: "cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670"
	I0915 07:54:16.961569   52982 cri.go:89] found id: "e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b"
	I0915 07:54:16.961573   52982 cri.go:89] found id: "676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673"
	I0915 07:54:16.961577   52982 cri.go:89] found id: "2c4bd4e283c76199843eb91ba57d5c2bccdd4277de781d655f74098aa5ba7bbc"
	I0915 07:54:16.961581   52982 cri.go:89] found id: ""
	I0915 07:54:16.961634   52982 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-742219 -n pause-742219
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-742219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-742219 logs -n 25: (1.349041614s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p test-preload-514007         | test-preload-514007       | jenkins | v1.34.0 | 15 Sep 24 07:47 UTC |                     |
	| delete  | -p test-preload-514007         | test-preload-514007       | jenkins | v1.34.0 | 15 Sep 24 07:49 UTC | 15 Sep 24 07:49 UTC |
	| start   | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:49 UTC | 15 Sep 24 07:50 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC | 15 Sep 24 07:50 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC | 15 Sep 24 07:50 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:51 UTC |
	| start   | -p force-systemd-env-756859    | force-systemd-env-756859  | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:52 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p cert-expiration-773617      | cert-expiration-773617    | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:52 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-742219 --memory=2048  | pause-742219              | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:54 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-727172         | offline-crio-727172       | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:53 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-756859    | force-systemd-env-756859  | jenkins | v1.34.0 | 15 Sep 24 07:52 UTC | 15 Sep 24 07:52 UTC |
	| start   | -p kubernetes-upgrade-669362   | kubernetes-upgrade-669362 | jenkins | v1.34.0 | 15 Sep 24 07:52 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-727172         | offline-crio-727172       | jenkins | v1.34.0 | 15 Sep 24 07:53 UTC | 15 Sep 24 07:53 UTC |
	| start   | -p stopped-upgrade-112030      | minikube                  | jenkins | v1.26.0 | 15 Sep 24 07:53 UTC | 15 Sep 24 07:54 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-742219                | pause-742219              | jenkins | v1.34.0 | 15 Sep 24 07:54 UTC | 15 Sep 24 07:54 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-112030 stop    | minikube                  | jenkins | v1.26.0 | 15 Sep 24 07:54 UTC | 15 Sep 24 07:54 UTC |
	| start   | -p stopped-upgrade-112030      | stopped-upgrade-112030    | jenkins | v1.34.0 | 15 Sep 24 07:54 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:54:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:54:28.843810   53205 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:54:28.844152   53205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:54:28.844167   53205 out.go:358] Setting ErrFile to fd 2...
	I0915 07:54:28.844176   53205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:54:28.844447   53205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:54:28.845290   53205 out.go:352] Setting JSON to false
	I0915 07:54:28.846538   53205 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5815,"bootTime":1726381054,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:54:28.846665   53205 start.go:139] virtualization: kvm guest
	I0915 07:54:28.849191   53205 out.go:177] * [stopped-upgrade-112030] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:54:28.850858   53205 notify.go:220] Checking for updates...
	I0915 07:54:28.850878   53205 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:54:28.852304   53205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:54:28.853744   53205 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:54:28.855189   53205 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:54:28.856602   53205 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:54:28.858012   53205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:54:28.860105   53205 config.go:182] Loaded profile config "stopped-upgrade-112030": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0915 07:54:28.860704   53205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:28.860756   53205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:28.877567   53205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I0915 07:54:28.878089   53205 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:28.878671   53205 main.go:141] libmachine: Using API Version  1
	I0915 07:54:28.878716   53205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:28.879107   53205 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:28.879287   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	I0915 07:54:28.881306   53205 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0915 07:54:28.882812   53205 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:54:28.883125   53205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:28.883173   53205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:28.897922   53205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43457
	I0915 07:54:28.898454   53205 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:28.898941   53205 main.go:141] libmachine: Using API Version  1
	I0915 07:54:28.898961   53205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:28.899308   53205 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:28.899519   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	I0915 07:54:28.938478   53205 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:54:28.939924   53205 start.go:297] selected driver: kvm2
	I0915 07:54:28.939939   53205 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-112030 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-112030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.194 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 07:54:28.940040   53205 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:54:28.940701   53205 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:54:28.940773   53205 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:54:28.957435   53205 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
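install.go validates the kvm2 machine driver found on PATH before reusing the profile. On a workstation the same check can be done manually; the version subcommand here is an assumption about how the driver binary is normally invoked, not something shown in this log (sketch):

	# Locate the driver binary and print its version (sketch; 'version' subcommand assumed)
	which docker-machine-driver-kvm2
	docker-machine-driver-kvm2 version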
	I0915 07:54:28.957924   53205 cni.go:84] Creating CNI manager for ""
	I0915 07:54:28.957991   53205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:54:28.958066   53205 start.go:340] cluster config:
	{Name:stopped-upgrade-112030 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-112030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.194 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 07:54:28.958201   53205 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:54:28.960147   53205 out.go:177] * Starting "stopped-upgrade-112030" primary control-plane node in "stopped-upgrade-112030" cluster
	I0915 07:54:28.961670   53205 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0915 07:54:28.961729   53205 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:54:28.961740   53205 cache.go:56] Caching tarball of preloaded images
	I0915 07:54:28.961870   53205 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:54:28.961884   53205 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0915 07:54:28.962003   53205 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/stopped-upgrade-112030/config.json ...
	I0915 07:54:28.962230   53205 start.go:360] acquireMachinesLock for stopped-upgrade-112030: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:54:28.962289   53205 start.go:364] duration metric: took 34.525µs to acquireMachinesLock for "stopped-upgrade-112030"
	I0915 07:54:28.962310   53205 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:54:28.962319   53205 fix.go:54] fixHost starting: 
	I0915 07:54:28.962727   53205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:28.962762   53205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:28.978308   53205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0915 07:54:28.978847   53205 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:28.979413   53205 main.go:141] libmachine: Using API Version  1
	I0915 07:54:28.979440   53205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:28.979752   53205 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:28.979961   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	I0915 07:54:28.980112   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetState
	I0915 07:54:28.981707   53205 fix.go:112] recreateIfNeeded on stopped-upgrade-112030: state=Stopped err=<nil>
	I0915 07:54:28.981755   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	W0915 07:54:28.981944   53205 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:54:28.983882   53205 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-112030" ...
	I0915 07:54:26.913031   52982 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85 97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186 8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280 cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670 e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b 676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673 2c4bd4e283c76199843eb91ba57d5c2bccdd4277de781d655f74098aa5ba7bbc: (9.774508232s)
	I0915 07:54:26.913138   52982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0915 07:54:26.960013   52982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 07:54:26.971488   52982 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 15 07:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep 15 07:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 15 07:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Sep 15 07:53 /etc/kubernetes/scheduler.conf
	
	I0915 07:54:26.971559   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 07:54:26.982230   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 07:54:26.991944   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 07:54:27.006142   52982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:54:27.006221   52982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 07:54:27.016916   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 07:54:27.030263   52982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:54:27.030337   52982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 07:54:27.040246   52982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 07:54:27.050121   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:27.158579   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:28.281672   52982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123056746s)
	I0915 07:54:28.281709   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:28.636723   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:28.821934   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
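The restart path above rebuilds the control plane by driving individual kubeadm phases rather than a full kubeadm init. Collected from the Run lines just above (same binaries path, version, and config file as logged), the sequence on the node is roughly:

	# kubeadm phases minikube ran while restarting the primary control plane (sketch, taken from the log)
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml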
	I0915 07:54:29.074586   52982 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:54:29.074699   52982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:54:29.128830   52982 api_server.go:72] duration metric: took 54.241279ms to wait for apiserver process to appear ...
	I0915 07:54:29.128862   52982 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:54:29.128889   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:29.259934   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 07:54:29.259962   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 07:54:29.629564   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:29.634769   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 07:54:29.634799   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 07:54:30.129020   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:30.162790   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 07:54:30.162847   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 07:54:30.629891   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:30.643529   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 07:54:30.643576   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 07:54:31.129935   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:31.148255   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 200:
	ok
	I0915 07:54:31.169695   52982 api_server.go:141] control plane version: v1.31.1
	I0915 07:54:31.169731   52982 api_server.go:131] duration metric: took 2.040860538s to wait for apiserver health ...
	I0915 07:54:31.169742   52982 cni.go:84] Creating CNI manager for ""
	I0915 07:54:31.169750   52982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:54:31.171584   52982 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 07:54:28.985643   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .Start
	I0915 07:54:28.985831   53205 main.go:141] libmachine: (stopped-upgrade-112030) Ensuring networks are active...
	I0915 07:54:28.986692   53205 main.go:141] libmachine: (stopped-upgrade-112030) Ensuring network default is active
	I0915 07:54:28.987025   53205 main.go:141] libmachine: (stopped-upgrade-112030) Ensuring network mk-stopped-upgrade-112030 is active
	I0915 07:54:28.987477   53205 main.go:141] libmachine: (stopped-upgrade-112030) Getting domain xml...
	I0915 07:54:28.988280   53205 main.go:141] libmachine: (stopped-upgrade-112030) Creating domain...
	I0915 07:54:30.282380   53205 main.go:141] libmachine: (stopped-upgrade-112030) Waiting to get IP...
	I0915 07:54:30.283113   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:30.283647   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:30.283734   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:30.283636   53239 retry.go:31] will retry after 238.646293ms: waiting for machine to come up
	I0915 07:54:30.524042   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:30.524717   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:30.524745   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:30.524665   53239 retry.go:31] will retry after 253.231975ms: waiting for machine to come up
	I0915 07:54:30.780267   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:30.780804   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:30.780831   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:30.780750   53239 retry.go:31] will retry after 380.943272ms: waiting for machine to come up
	I0915 07:54:31.163240   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:31.163716   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:31.163742   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:31.163687   53239 retry.go:31] will retry after 558.074606ms: waiting for machine to come up
	I0915 07:54:31.722807   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:31.723256   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:31.723282   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:31.723209   53239 retry.go:31] will retry after 655.819426ms: waiting for machine to come up
	I0915 07:54:32.381273   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:32.381856   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:32.381879   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:32.381780   53239 retry.go:31] will retry after 759.800455ms: waiting for machine to come up
	I0915 07:54:33.143298   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:33.143768   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:33.143785   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:33.143737   53239 retry.go:31] will retry after 748.789844ms: waiting for machine to come up
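The DBG retry loop above is libmachine waiting for libvirt to hand the restarted VM a DHCP lease on its networks. Outside the test harness, the lease (and the MAC 52:54:00:b9:18:f1 shown in the log) can usually be checked directly on the host; a hedged sketch:

	# Show DHCP leases on the profile's private network and on the default network the log activates (sketch)
	sudo virsh net-dhcp-leases mk-stopped-upgrade-112030
	sudo virsh net-dhcp-leases default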
	I0915 07:54:32.699466   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:54:32.699683   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
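The two 52215 lines above come from a different start that is interleaved in this log window; kubeadm is reporting that the kubelet's local health endpoint is not answering yet. A hedged way to investigate the same symptom on that node:

	# Probe the kubelet health port kubeadm uses, then look at the unit (sketch)
	curl -sSL http://localhost:10248/healthz
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager -n 50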
	I0915 07:54:31.172993   52982 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 07:54:31.187690   52982 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
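The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen at cni.go:146 above. Its contents are not captured in this report; to look at what was actually written, something like the following should do (sketch):

	# Inspect the generated bridge CNI config on the pause-742219 node (sketch)
	minikube -p pause-742219 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist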
	I0915 07:54:31.217948   52982 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:54:31.218036   52982 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 07:54:31.218055   52982 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 07:54:31.233832   52982 system_pods.go:59] 6 kube-system pods found
	I0915 07:54:31.233872   52982 system_pods.go:61] "coredns-7c65d6cfc9-8ngzs" [4ec3a715-c19d-4fe0-88e5-d2b36fc56640] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 07:54:31.233883   52982 system_pods.go:61] "etcd-pause-742219" [cc120ac0-39e0-4dcd-93d1-6c6547740b5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0915 07:54:31.233892   52982 system_pods.go:61] "kube-apiserver-pause-742219" [40f4e787-0eb1-4f72-926d-8913fa27752f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 07:54:31.233908   52982 system_pods.go:61] "kube-controller-manager-pause-742219" [7deb3134-b61a-4b78-a29c-5b2d7e76cdce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 07:54:31.233917   52982 system_pods.go:61] "kube-proxy-8dd9x" [6a4779f5-5b4b-42d4-919a-025dcc1b52a5] Running
	I0915 07:54:31.233929   52982 system_pods.go:61] "kube-scheduler-pause-742219" [ca2cff60-799b-44db-8f00-99fbfd1b1caa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0915 07:54:31.233950   52982 system_pods.go:74] duration metric: took 15.974139ms to wait for pod list to return data ...
	I0915 07:54:31.233996   52982 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:54:31.239807   52982 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:54:31.239833   52982 node_conditions.go:123] node cpu capacity is 2
	I0915 07:54:31.239842   52982 node_conditions.go:105] duration metric: took 5.842012ms to run NodePressure ...
	I0915 07:54:31.239857   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:31.516949   52982 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0915 07:54:31.521312   52982 kubeadm.go:739] kubelet initialised
	I0915 07:54:31.521339   52982 kubeadm.go:740] duration metric: took 4.360349ms waiting for restarted kubelet to initialise ...
	I0915 07:54:31.521349   52982 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:54:31.526002   52982 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:33.536362   52982 pod_ready.go:93] pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:33.536385   52982 pod_ready.go:82] duration metric: took 2.010359533s for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:33.536394   52982 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
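pod_ready.go is polling the Ready condition of each system-critical pod by label. An equivalent manual check, assuming the standard component labels the log itself lists (sketch):

	# List the control-plane pods the test waits on, then read one Ready condition directly (sketch)
	kubectl --context pause-742219 -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
	kubectl --context pause-742219 -n kube-system get pod etcd-pause-742219 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'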
	I0915 07:54:33.893647   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:33.894074   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:33.894097   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:33.894040   53239 retry.go:31] will retry after 902.669448ms: waiting for machine to come up
	I0915 07:54:34.798020   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:34.798455   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:34.798493   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:34.798415   53239 retry.go:31] will retry after 1.617204726s: waiting for machine to come up
	I0915 07:54:36.417677   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:36.418224   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:36.418243   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:36.418192   53239 retry.go:31] will retry after 2.105966246s: waiting for machine to come up
	I0915 07:54:38.526641   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:38.527180   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:38.527208   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:38.527136   53239 retry.go:31] will retry after 2.700403276s: waiting for machine to come up
	I0915 07:54:35.542830   52982 pod_ready.go:103] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:37.543810   52982 pod_ready.go:103] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:40.044074   52982 pod_ready.go:103] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:41.043416   52982 pod_ready.go:93] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:41.043452   52982 pod_ready.go:82] duration metric: took 7.507051455s for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:41.043465   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.050531   52982 pod_ready.go:103] pod "kube-apiserver-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:43.549261   52982 pod_ready.go:93] pod "kube-apiserver-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.549281   52982 pod_ready.go:82] duration metric: took 2.505809623s for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.549291   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.554425   52982 pod_ready.go:93] pod "kube-controller-manager-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.554447   52982 pod_ready.go:82] duration metric: took 5.147958ms for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.554458   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.559138   52982 pod_ready.go:93] pod "kube-proxy-8dd9x" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.559158   52982 pod_ready.go:82] duration metric: took 4.691838ms for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.559168   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.565697   52982 pod_ready.go:93] pod "kube-scheduler-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.565736   52982 pod_ready.go:82] duration metric: took 6.544377ms for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.565746   52982 pod_ready.go:39] duration metric: took 12.044385721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:54:43.565767   52982 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 07:54:43.578359   52982 ops.go:34] apiserver oom_adj: -16
	I0915 07:54:43.578378   52982 kubeadm.go:597] duration metric: took 26.561998369s to restartPrimaryControlPlane
	I0915 07:54:43.578388   52982 kubeadm.go:394] duration metric: took 26.660645722s to StartCluster
	I0915 07:54:43.578406   52982 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:54:43.578495   52982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:54:43.579329   52982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:54:43.579538   52982 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:54:43.579606   52982 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 07:54:43.579795   52982 config.go:182] Loaded profile config "pause-742219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:54:43.582150   52982 out.go:177] * Verifying Kubernetes components...
	I0915 07:54:43.582150   52982 out.go:177] * Enabled addons: 
	I0915 07:54:41.228784   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:41.229305   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:41.229335   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:41.229251   53239 retry.go:31] will retry after 2.30208194s: waiting for machine to come up
	I0915 07:54:43.532679   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:43.533255   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:43.533298   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:43.533210   53239 retry.go:31] will retry after 4.528304537s: waiting for machine to come up
	I0915 07:54:43.584015   52982 addons.go:510] duration metric: took 4.414093ms for enable addons: enabled=[]
	I0915 07:54:43.584045   52982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:54:43.748895   52982 ssh_runner.go:195] Run: sudo systemctl start kubelet
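With the addons step a no-op, the run reloads systemd and starts kubelet before re-checking node and pod readiness. Reproduced by hand against the same profile (hedged sketch):

	# Mirror the restart above, then confirm the node reports Ready (sketch)
	minikube -p pause-742219 ssh -- 'sudo systemctl daemon-reload && sudo systemctl start kubelet'
	kubectl --context pause-742219 get nodes -o wide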
	I0915 07:54:43.765270   52982 node_ready.go:35] waiting up to 6m0s for node "pause-742219" to be "Ready" ...
	I0915 07:54:43.768569   52982 node_ready.go:49] node "pause-742219" has status "Ready":"True"
	I0915 07:54:43.768591   52982 node_ready.go:38] duration metric: took 3.282171ms for node "pause-742219" to be "Ready" ...
	I0915 07:54:43.768600   52982 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:54:43.773460   52982 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.948167   52982 pod_ready.go:93] pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.948198   52982 pod_ready.go:82] duration metric: took 174.713586ms for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.948212   52982 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.347157   52982 pod_ready.go:93] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:44.347194   52982 pod_ready.go:82] duration metric: took 398.962807ms for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.347207   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.747540   52982 pod_ready.go:93] pod "kube-apiserver-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:44.747568   52982 pod_ready.go:82] duration metric: took 400.35265ms for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.747582   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.147517   52982 pod_ready.go:93] pod "kube-controller-manager-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:45.147541   52982 pod_ready.go:82] duration metric: took 399.951602ms for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.147553   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.546718   52982 pod_ready.go:93] pod "kube-proxy-8dd9x" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:45.546741   52982 pod_ready.go:82] duration metric: took 399.18215ms for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.546751   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.947380   52982 pod_ready.go:93] pod "kube-scheduler-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:45.947403   52982 pod_ready.go:82] duration metric: took 400.635945ms for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.947414   52982 pod_ready.go:39] duration metric: took 2.178804286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:54:45.947430   52982 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:54:45.947487   52982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:54:45.961677   52982 api_server.go:72] duration metric: took 2.382110261s to wait for apiserver process to appear ...
	I0915 07:54:45.961703   52982 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:54:45.961719   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:45.967135   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 200:
	ok
	I0915 07:54:45.968088   52982 api_server.go:141] control plane version: v1.31.1
	I0915 07:54:45.968111   52982 api_server.go:131] duration metric: took 6.401949ms to wait for apiserver health ...
	I0915 07:54:45.968119   52982 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:54:46.150099   52982 system_pods.go:59] 6 kube-system pods found
	I0915 07:54:46.150129   52982 system_pods.go:61] "coredns-7c65d6cfc9-8ngzs" [4ec3a715-c19d-4fe0-88e5-d2b36fc56640] Running
	I0915 07:54:46.150136   52982 system_pods.go:61] "etcd-pause-742219" [cc120ac0-39e0-4dcd-93d1-6c6547740b5d] Running
	I0915 07:54:46.150140   52982 system_pods.go:61] "kube-apiserver-pause-742219" [40f4e787-0eb1-4f72-926d-8913fa27752f] Running
	I0915 07:54:46.150144   52982 system_pods.go:61] "kube-controller-manager-pause-742219" [7deb3134-b61a-4b78-a29c-5b2d7e76cdce] Running
	I0915 07:54:46.150147   52982 system_pods.go:61] "kube-proxy-8dd9x" [6a4779f5-5b4b-42d4-919a-025dcc1b52a5] Running
	I0915 07:54:46.150151   52982 system_pods.go:61] "kube-scheduler-pause-742219" [ca2cff60-799b-44db-8f00-99fbfd1b1caa] Running
	I0915 07:54:46.150158   52982 system_pods.go:74] duration metric: took 182.033731ms to wait for pod list to return data ...
	I0915 07:54:46.150167   52982 default_sa.go:34] waiting for default service account to be created ...
	I0915 07:54:46.348082   52982 default_sa.go:45] found service account: "default"
	I0915 07:54:46.348109   52982 default_sa.go:55] duration metric: took 197.935631ms for default service account to be created ...
	I0915 07:54:46.348119   52982 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 07:54:46.549298   52982 system_pods.go:86] 6 kube-system pods found
	I0915 07:54:46.549328   52982 system_pods.go:89] "coredns-7c65d6cfc9-8ngzs" [4ec3a715-c19d-4fe0-88e5-d2b36fc56640] Running
	I0915 07:54:46.549336   52982 system_pods.go:89] "etcd-pause-742219" [cc120ac0-39e0-4dcd-93d1-6c6547740b5d] Running
	I0915 07:54:46.549345   52982 system_pods.go:89] "kube-apiserver-pause-742219" [40f4e787-0eb1-4f72-926d-8913fa27752f] Running
	I0915 07:54:46.549350   52982 system_pods.go:89] "kube-controller-manager-pause-742219" [7deb3134-b61a-4b78-a29c-5b2d7e76cdce] Running
	I0915 07:54:46.549355   52982 system_pods.go:89] "kube-proxy-8dd9x" [6a4779f5-5b4b-42d4-919a-025dcc1b52a5] Running
	I0915 07:54:46.549360   52982 system_pods.go:89] "kube-scheduler-pause-742219" [ca2cff60-799b-44db-8f00-99fbfd1b1caa] Running
	I0915 07:54:46.549368   52982 system_pods.go:126] duration metric: took 201.243199ms to wait for k8s-apps to be running ...
	I0915 07:54:46.549377   52982 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:54:46.549427   52982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:54:46.563995   52982 system_svc.go:56] duration metric: took 14.612396ms WaitForService to wait for kubelet
	I0915 07:54:46.564025   52982 kubeadm.go:582] duration metric: took 2.984463974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:54:46.564046   52982 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:54:46.748043   52982 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:54:46.748085   52982 node_conditions.go:123] node cpu capacity is 2
	I0915 07:54:46.748098   52982 node_conditions.go:105] duration metric: took 184.045523ms to run NodePressure ...
	I0915 07:54:46.748109   52982 start.go:241] waiting for startup goroutines ...
	I0915 07:54:46.748118   52982 start.go:246] waiting for cluster config update ...
	I0915 07:54:46.748128   52982 start.go:255] writing updated cluster config ...
	I0915 07:54:46.748468   52982 ssh_runner.go:195] Run: rm -f paused
	I0915 07:54:46.794117   52982 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 07:54:46.796205   52982 out.go:177] * Done! kubectl is now configured to use "pause-742219" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.428974525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386887428939067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1b5a13d-6ec3-40f4-8698-db59287cd347 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.429704547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ecdf2fb-7a2b-48a0-a26e-173a79e28fa0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.429760424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ecdf2fb-7a2b-48a0-a26e-173a79e28fa0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.430166337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ecdf2fb-7a2b-48a0-a26e-173a79e28fa0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.475573850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb504f93-5adf-42f2-a1c3-fa1e48f5434e name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.475685334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb504f93-5adf-42f2-a1c3-fa1e48f5434e name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.476831748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d64c5f10-5714-4487-bfb7-7e283f7dede7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.477317548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386887477292895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d64c5f10-5714-4487-bfb7-7e283f7dede7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.477834461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=995d5507-d698-41a1-be74-290a1995ac36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.477976745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=995d5507-d698-41a1-be74-290a1995ac36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.478261849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=995d5507-d698-41a1-be74-290a1995ac36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.524630376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6474939-a6f1-4372-a978-0b6a05935c57 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.524726020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6474939-a6f1-4372-a978-0b6a05935c57 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.526196276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75b32bda-b965-45cc-b1d5-1719c656f4b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.526662304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386887526634299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75b32bda-b965-45cc-b1d5-1719c656f4b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.527432903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2467889-efd8-41f5-b9e3-525b91273412 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.527514259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2467889-efd8-41f5-b9e3-525b91273412 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.527924571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2467889-efd8-41f5-b9e3-525b91273412 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.572943720Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d2b3442-7eb2-4fd1-acc4-c04e55c702a0 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.573068169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d2b3442-7eb2-4fd1-acc4-c04e55c702a0 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.574703976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c726d3db-3df5-4d1b-841e-40c3d1916bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.575326268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386887575291975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c726d3db-3df5-4d1b-841e-40c3d1916bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.576003422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b20b2bf-09f9-4500-9833-bfff45686a21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.576075310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b20b2bf-09f9-4500-9833-bfff45686a21 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:47 pause-742219 crio[2844]: time="2024-09-15 07:54:47.576406271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b20b2bf-09f9-4500-9833-bfff45686a21 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84d479066af5a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 seconds ago      Running             coredns                   2                   45104aced3b14       coredns-7c65d6cfc9-8ngzs
	a0b6ffb5f9124       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 seconds ago      Running             kube-proxy                2                   de97d3f4e9eed       kube-proxy-8dd9x
	16900bb163bf9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 seconds ago      Running             kube-controller-manager   2                   0b73dce7f0b58       kube-controller-manager-pause-742219
	26b2e09ba379e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   22 seconds ago      Running             kube-scheduler            2                   f9786ebfdec93       kube-scheduler-pause-742219
	5d950374b7966       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Running             etcd                      2                   7524801c19869       etcd-pause-742219
	b3d2112c6f8ec       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago      Running             kube-apiserver            2                   1ad16f05d91bb       kube-apiserver-pause-742219
	dd16e21e72273       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   31 seconds ago      Exited              coredns                   1                   45104aced3b14       coredns-7c65d6cfc9-8ngzs
	97362824ba1f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   33 seconds ago      Exited              kube-apiserver            1                   00856cdaaea9f       kube-apiserver-pause-742219
	8a92741efd4a1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   33 seconds ago      Exited              etcd                      1                   c7d1cc7d31818       etcd-pause-742219
	cdb50f4eaa6a0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   33 seconds ago      Exited              kube-scheduler            1                   f06672ce3f73a       kube-scheduler-pause-742219
	e95202c6eff0b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   33 seconds ago      Exited              kube-proxy                1                   f5d18c6d14fce       kube-proxy-8dd9x
	676cee692affa       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   33 seconds ago      Exited              kube-controller-manager   1                   8fb079d276072       kube-controller-manager-pause-742219
	
	
	==> coredns [84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32916 - 16060 "HINFO IN 7423366553452225556.3843291169860854247. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01448077s
	
	
	==> coredns [dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59113 - 45040 "HINFO IN 1667818845255481148.1975024935033198385. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014178542s
	
	
	==> describe nodes <==
	Name:               pause-742219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-742219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=pause-742219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_53_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:53:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-742219
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:54:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.43
	  Hostname:    pause-742219
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 934c3cc52e394b5689f4b55093133e57
	  System UUID:                934c3cc5-2e39-4b56-89f4-b55093133e57
	  Boot ID:                    377907be-253e-4a79-b331-ca93481f13ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8ngzs                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-pause-742219                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         91s
	  kube-system                 kube-apiserver-pause-742219             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-pause-742219    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-8dd9x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-pause-742219             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s (x2 over 91s)  kubelet          Node pause-742219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x2 over 91s)  kubelet          Node pause-742219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x2 over 91s)  kubelet          Node pause-742219 status is now: NodeHasSufficientPID
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeReady                90s                kubelet          Node pause-742219 status is now: NodeReady
	  Normal  RegisteredNode           87s                node-controller  Node pause-742219 event: Registered Node pause-742219 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x2 over 18s)  kubelet          Node pause-742219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x2 over 18s)  kubelet          Node pause-742219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x2 over 18s)  kubelet          Node pause-742219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-742219 event: Registered Node pause-742219 in Controller
	
	
	==> dmesg <==
	[  +0.065324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063114] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.212686] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.152066] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.306503] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Sep15 07:53] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +0.068941] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.546990] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.628635] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.960296] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.096462] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.788805] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +0.609036] kauditd_printk_skb: 43 callbacks suppressed
	[ +10.905697] kauditd_printk_skb: 64 callbacks suppressed
	[Sep15 07:54] systemd-fstab-generator[2232]: Ignoring "noauto" option for root device
	[  +0.136657] systemd-fstab-generator[2244]: Ignoring "noauto" option for root device
	[  +0.165105] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[  +0.137678] systemd-fstab-generator[2270]: Ignoring "noauto" option for root device
	[  +0.946976] systemd-fstab-generator[2644]: Ignoring "noauto" option for root device
	[  +1.144677] systemd-fstab-generator[3069]: Ignoring "noauto" option for root device
	[  +9.620877] kauditd_printk_skb: 248 callbacks suppressed
	[  +3.164514] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +1.852987] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.287374] systemd-fstab-generator[4020]: Ignoring "noauto" option for root device
	[  +0.100875] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e] <==
	{"level":"info","ts":"2024-09-15T07:54:25.790994Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-15T07:54:25.790967Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"19a433c09434770","initial-advertise-peer-urls":["https://192.168.72.43:2380"],"listen-peer-urls":["https://192.168.72.43:2380"],"advertise-client-urls":["https://192.168.72.43:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.43:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T07:54:25.791056Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T07:54:25.791170Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.43:2380"}
	{"level":"info","ts":"2024-09-15T07:54:25.791264Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.43:2380"}
	{"level":"info","ts":"2024-09-15T07:54:25.785434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 switched to configuration voters=(115478665583871856)"}
	{"level":"info","ts":"2024-09-15T07:54:25.791550Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ed568b97e66db48","local-member-id":"19a433c09434770","added-peer-id":"19a433c09434770","added-peer-peer-urls":["https://192.168.72.43:2380"]}
	{"level":"info","ts":"2024-09-15T07:54:25.791753Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ed568b97e66db48","local-member-id":"19a433c09434770","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:54:25.791914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:54:27.057801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-15T07:54:27.057926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T07:54:27.057963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 received MsgPreVoteResp from 19a433c09434770 at term 2"}
	{"level":"info","ts":"2024-09-15T07:54:27.057979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.057985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 received MsgVoteResp from 19a433c09434770 at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.057993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 became leader at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.058000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 19a433c09434770 elected leader 19a433c09434770 at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.061146Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"19a433c09434770","local-member-attributes":"{Name:pause-742219 ClientURLs:[https://192.168.72.43:2379]}","request-path":"/0/members/19a433c09434770/attributes","cluster-id":"ed568b97e66db48","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T07:54:27.061149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:54:27.061259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:54:27.061738Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T07:54:27.061755Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T07:54:27.062557Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:54:27.063316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T07:54:27.071313Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:54:27.072223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.43:2379"}
	
	
	==> etcd [8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280] <==
	
	
	==> kernel <==
	 07:54:47 up 2 min,  0 users,  load average: 0.70, 0.26, 0.09
	Linux pause-742219 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186] <==
	
	
	==> kube-apiserver [b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c] <==
	I0915 07:54:29.235989       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 07:54:29.238236       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:54:29.251552       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:54:29.251605       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:54:29.254270       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:54:29.254307       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:54:29.254395       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:54:29.256593       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:54:29.256637       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:54:29.256644       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:54:29.256649       1 cache.go:39] Caches are synced for autoregister controller
	I0915 07:54:29.285646       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:54:29.297992       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 07:54:29.306426       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:54:29.306471       1 policy_source.go:224] refreshing policies
	I0915 07:54:29.334671       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0915 07:54:29.392891       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:54:30.187682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 07:54:31.348268       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 07:54:31.367704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 07:54:31.431118       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 07:54:31.472565       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 07:54:31.485479       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 07:54:33.274674       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 07:54:33.378460       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695] <==
	I0915 07:54:32.967420       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0915 07:54:32.967470       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0915 07:54:32.967504       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0915 07:54:32.967953       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0915 07:54:32.968023       1 shared_informer.go:320] Caches are synced for daemon sets
	I0915 07:54:32.967969       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-742219"
	I0915 07:54:32.968103       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0915 07:54:32.973734       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0915 07:54:32.975760       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0915 07:54:32.978718       1 shared_informer.go:320] Caches are synced for attach detach
	I0915 07:54:32.981410       1 shared_informer.go:320] Caches are synced for expand
	I0915 07:54:32.984917       1 shared_informer.go:320] Caches are synced for endpoint
	I0915 07:54:33.004362       1 shared_informer.go:320] Caches are synced for persistent volume
	I0915 07:54:33.037679       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0915 07:54:33.042059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.383576ms"
	I0915 07:54:33.042618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.84µs"
	I0915 07:54:33.068227       1 shared_informer.go:320] Caches are synced for crt configmap
	I0915 07:54:33.166849       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 07:54:33.168354       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0915 07:54:33.177124       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 07:54:33.340829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="11.32744ms"
	I0915 07:54:33.340986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.983µs"
	I0915 07:54:33.609960       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 07:54:33.621645       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 07:54:33.621706       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673] <==
	
	
	==> kube-proxy [a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:54:30.534163       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:54:30.552147       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.43"]
	E0915 07:54:30.552239       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:54:30.628982       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:54:30.629089       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:54:30.629140       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:54:30.636208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:54:30.636737       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:54:30.638930       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:54:30.641250       1 config.go:199] "Starting service config controller"
	I0915 07:54:30.641366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:54:30.641485       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:54:30.641582       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:54:30.642237       1 config.go:328] "Starting node config controller"
	I0915 07:54:30.642309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:54:30.742583       1 shared_informer.go:320] Caches are synced for node config
	I0915 07:54:30.742944       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:54:30.742986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b] <==
	
	
	==> kube-scheduler [26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5] <==
	I0915 07:54:26.144941       1 serving.go:386] Generated self-signed cert in-memory
	W0915 07:54:29.183320       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 07:54:29.183425       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 07:54:29.183436       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 07:54:29.183534       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 07:54:29.289701       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 07:54:29.289745       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:54:29.296108       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 07:54:29.296273       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 07:54:29.296313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 07:54:29.296747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 07:54:29.397600       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670] <==
	
	
	==> kubelet <==
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397177    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4502dde6e4b482035021a9efb10a323-usr-share-ca-certificates\") pod \"kube-apiserver-pause-742219\" (UID: \"f4502dde6e4b482035021a9efb10a323\") " pod="kube-system/kube-apiserver-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397293    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1585d66306ff8a19870a02080fd586ce-ca-certs\") pod \"kube-controller-manager-pause-742219\" (UID: \"1585d66306ff8a19870a02080fd586ce\") " pod="kube-system/kube-controller-manager-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397415    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1585d66306ff8a19870a02080fd586ce-k8s-certs\") pod \"kube-controller-manager-pause-742219\" (UID: \"1585d66306ff8a19870a02080fd586ce\") " pod="kube-system/kube-controller-manager-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397529    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1585d66306ff8a19870a02080fd586ce-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-742219\" (UID: \"1585d66306ff8a19870a02080fd586ce\") " pod="kube-system/kube-controller-manager-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397657    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/829e379fddfad6d8892a76796f0aafae-kubeconfig\") pod \"kube-scheduler-pause-742219\" (UID: \"829e379fddfad6d8892a76796f0aafae\") " pod="kube-system/kube-scheduler-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397771    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3ac30683a8c8a05a895d1c63585eb16e-etcd-certs\") pod \"etcd-pause-742219\" (UID: \"3ac30683a8c8a05a895d1c63585eb16e\") " pod="kube-system/etcd-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.414317    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-742219\" already exists" pod="kube-system/kube-scheduler-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.414717    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-742219\" already exists" pod="kube-system/kube-apiserver-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.414885    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-742219\" already exists" pod="kube-system/etcd-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.415346    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-742219\" already exists" pod="kube-system/kube-controller-manager-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.418461    3724 kubelet_node_status.go:111] "Node was previously registered" node="pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.418550    3724 kubelet_node_status.go:75] "Successfully registered node" node="pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.418588    3724 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.420090    3724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.715556    3724 scope.go:117] "RemoveContainer" containerID="676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.815754    3724 apiserver.go:52] "Watching apiserver"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.862978    3724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.900983    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4779f5-5b4b-42d4-919a-025dcc1b52a5-lib-modules\") pod \"kube-proxy-8dd9x\" (UID: \"6a4779f5-5b4b-42d4-919a-025dcc1b52a5\") " pod="kube-system/kube-proxy-8dd9x"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.901633    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4779f5-5b4b-42d4-919a-025dcc1b52a5-xtables-lock\") pod \"kube-proxy-8dd9x\" (UID: \"6a4779f5-5b4b-42d4-919a-025dcc1b52a5\") " pod="kube-system/kube-proxy-8dd9x"
	Sep 15 07:54:30 pause-742219 kubelet[3724]: I0915 07:54:30.121246    3724 scope.go:117] "RemoveContainer" containerID="dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85"
	Sep 15 07:54:30 pause-742219 kubelet[3724]: I0915 07:54:30.122354    3724 scope.go:117] "RemoveContainer" containerID="e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b"
	Sep 15 07:54:30 pause-742219 kubelet[3724]: E0915 07:54:30.162147    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-742219\" already exists" pod="kube-system/kube-apiserver-pause-742219"
	Sep 15 07:54:33 pause-742219 kubelet[3724]: I0915 07:54:33.306943    3724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 15 07:54:39 pause-742219 kubelet[3724]: E0915 07:54:39.098846    3724 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386879098534501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:54:39 pause-742219 kubelet[3724]: E0915 07:54:39.099317    3724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386879098534501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-742219 -n pause-742219
helpers_test.go:261: (dbg) Run:  kubectl --context pause-742219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-742219 -n pause-742219
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-742219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-742219 logs -n 25: (1.380714358s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p test-preload-514007         | test-preload-514007       | jenkins | v1.34.0 | 15 Sep 24 07:47 UTC |                     |
	| delete  | -p test-preload-514007         | test-preload-514007       | jenkins | v1.34.0 | 15 Sep 24 07:49 UTC | 15 Sep 24 07:49 UTC |
	| start   | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:49 UTC | 15 Sep 24 07:50 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC | 15 Sep 24 07:50 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:50 UTC | 15 Sep 24 07:50 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-660162       | scheduled-stop-660162     | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:51 UTC |
	| start   | -p force-systemd-env-756859    | force-systemd-env-756859  | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:52 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p cert-expiration-773617      | cert-expiration-773617    | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:52 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-742219 --memory=2048  | pause-742219              | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:54 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-727172         | offline-crio-727172       | jenkins | v1.34.0 | 15 Sep 24 07:51 UTC | 15 Sep 24 07:53 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-756859    | force-systemd-env-756859  | jenkins | v1.34.0 | 15 Sep 24 07:52 UTC | 15 Sep 24 07:52 UTC |
	| start   | -p kubernetes-upgrade-669362   | kubernetes-upgrade-669362 | jenkins | v1.34.0 | 15 Sep 24 07:52 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-727172         | offline-crio-727172       | jenkins | v1.34.0 | 15 Sep 24 07:53 UTC | 15 Sep 24 07:53 UTC |
	| start   | -p stopped-upgrade-112030      | minikube                  | jenkins | v1.26.0 | 15 Sep 24 07:53 UTC | 15 Sep 24 07:54 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-742219                | pause-742219              | jenkins | v1.34.0 | 15 Sep 24 07:54 UTC | 15 Sep 24 07:54 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-112030 stop    | minikube                  | jenkins | v1.26.0 | 15 Sep 24 07:54 UTC | 15 Sep 24 07:54 UTC |
	| start   | -p stopped-upgrade-112030      | stopped-upgrade-112030    | jenkins | v1.34.0 | 15 Sep 24 07:54 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:54:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:54:28.843810   53205 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:54:28.844152   53205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:54:28.844167   53205 out.go:358] Setting ErrFile to fd 2...
	I0915 07:54:28.844176   53205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:54:28.844447   53205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:54:28.845290   53205 out.go:352] Setting JSON to false
	I0915 07:54:28.846538   53205 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5815,"bootTime":1726381054,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:54:28.846665   53205 start.go:139] virtualization: kvm guest
	I0915 07:54:28.849191   53205 out.go:177] * [stopped-upgrade-112030] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:54:28.850858   53205 notify.go:220] Checking for updates...
	I0915 07:54:28.850878   53205 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:54:28.852304   53205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:54:28.853744   53205 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:54:28.855189   53205 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 07:54:28.856602   53205 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:54:28.858012   53205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:54:28.860105   53205 config.go:182] Loaded profile config "stopped-upgrade-112030": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0915 07:54:28.860704   53205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:28.860756   53205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:28.877567   53205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I0915 07:54:28.878089   53205 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:28.878671   53205 main.go:141] libmachine: Using API Version  1
	I0915 07:54:28.878716   53205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:28.879107   53205 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:28.879287   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	I0915 07:54:28.881306   53205 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0915 07:54:28.882812   53205 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:54:28.883125   53205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:28.883173   53205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:28.897922   53205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43457
	I0915 07:54:28.898454   53205 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:28.898941   53205 main.go:141] libmachine: Using API Version  1
	I0915 07:54:28.898961   53205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:28.899308   53205 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:28.899519   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	I0915 07:54:28.938478   53205 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 07:54:28.939924   53205 start.go:297] selected driver: kvm2
	I0915 07:54:28.939939   53205 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-112030 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-112030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.194 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 07:54:28.940040   53205 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:54:28.940701   53205 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:54:28.940773   53205 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 07:54:28.957435   53205 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 07:54:28.957924   53205 cni.go:84] Creating CNI manager for ""
	I0915 07:54:28.957991   53205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:54:28.958066   53205 start.go:340] cluster config:
	{Name:stopped-upgrade-112030 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-112030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.194 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 07:54:28.958201   53205 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:54:28.960147   53205 out.go:177] * Starting "stopped-upgrade-112030" primary control-plane node in "stopped-upgrade-112030" cluster
	I0915 07:54:28.961670   53205 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0915 07:54:28.961729   53205 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0915 07:54:28.961740   53205 cache.go:56] Caching tarball of preloaded images
	I0915 07:54:28.961870   53205 preload.go:172] Found /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 07:54:28.961884   53205 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0915 07:54:28.962003   53205 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/stopped-upgrade-112030/config.json ...
	I0915 07:54:28.962230   53205 start.go:360] acquireMachinesLock for stopped-upgrade-112030: {Name:mk0e24fdb2bdb4f5f97e5617bdb9c7d9e06aacd2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 07:54:28.962289   53205 start.go:364] duration metric: took 34.525µs to acquireMachinesLock for "stopped-upgrade-112030"
	I0915 07:54:28.962310   53205 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:54:28.962319   53205 fix.go:54] fixHost starting: 
	I0915 07:54:28.962727   53205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:54:28.962762   53205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:54:28.978308   53205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0915 07:54:28.978847   53205 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:54:28.979413   53205 main.go:141] libmachine: Using API Version  1
	I0915 07:54:28.979440   53205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:54:28.979752   53205 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:54:28.979961   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	I0915 07:54:28.980112   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetState
	I0915 07:54:28.981707   53205 fix.go:112] recreateIfNeeded on stopped-upgrade-112030: state=Stopped err=<nil>
	I0915 07:54:28.981755   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	W0915 07:54:28.981944   53205 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:54:28.983882   53205 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-112030" ...
	I0915 07:54:26.913031   52982 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85 97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186 8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280 cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670 e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b 676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673 2c4bd4e283c76199843eb91ba57d5c2bccdd4277de781d655f74098aa5ba7bbc: (9.774508232s)
	I0915 07:54:26.913138   52982 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0915 07:54:26.960013   52982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 07:54:26.971488   52982 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 15 07:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep 15 07:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 15 07:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Sep 15 07:53 /etc/kubernetes/scheduler.conf
	
	I0915 07:54:26.971559   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 07:54:26.982230   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 07:54:26.991944   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 07:54:27.006142   52982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:54:27.006221   52982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 07:54:27.016916   52982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 07:54:27.030263   52982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:54:27.030337   52982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 07:54:27.040246   52982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 07:54:27.050121   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:27.158579   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:28.281672   52982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123056746s)
	I0915 07:54:28.281709   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:28.636723   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:28.821934   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
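	
	The restart above re-runs kubeadm phase by phase instead of a full kubeadm init. A minimal shell sketch of that sequence, using the binary path and config file exactly as logged (the "addon all" phase runs later, once the API server reports healthy); this is an illustrative condensation, not the literal commands minikube issues:
	
	    sudo systemctl stop kubelet
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      # Same PATH and kubeadm.yaml as in the log lines above
	      sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done
	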
	I0915 07:54:29.074586   52982 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:54:29.074699   52982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:54:29.128830   52982 api_server.go:72] duration metric: took 54.241279ms to wait for apiserver process to appear ...
	I0915 07:54:29.128862   52982 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:54:29.128889   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:29.259934   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 07:54:29.259962   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 07:54:29.629564   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:29.634769   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 07:54:29.634799   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 07:54:30.129020   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:30.162790   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 07:54:30.162847   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 07:54:30.629891   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:30.643529   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0915 07:54:30.643576   52982 api_server.go:103] status: https://192.168.72.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0915 07:54:31.129935   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:31.148255   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 200:
	ok
	I0915 07:54:31.169695   52982 api_server.go:141] control plane version: v1.31.1
	I0915 07:54:31.169731   52982 api_server.go:131] duration metric: took 2.040860538s to wait for apiserver health ...
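	
	The polling loop above is the usual apiserver readiness pattern: anonymous requests are rejected with 403, /healthz returns 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still pending, then 200 once bootstrap completes. A hedged, hand-run equivalent, assuming the kubeconfig context created for this profile (pause-742219):
	
	    # Returns the same [+]/[-] per-check listing seen in the 500 responses above
	    kubectl --context pause-742219 get --raw '/healthz?verbose'
	    # Direct, unauthenticated probe; expect 403 Forbidden, as logged
	    curl -k https://192.168.72.43:8443/healthz
	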
	I0915 07:54:31.169742   52982 cni.go:84] Creating CNI manager for ""
	I0915 07:54:31.169750   52982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 07:54:31.171584   52982 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 07:54:28.985643   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .Start
	I0915 07:54:28.985831   53205 main.go:141] libmachine: (stopped-upgrade-112030) Ensuring networks are active...
	I0915 07:54:28.986692   53205 main.go:141] libmachine: (stopped-upgrade-112030) Ensuring network default is active
	I0915 07:54:28.987025   53205 main.go:141] libmachine: (stopped-upgrade-112030) Ensuring network mk-stopped-upgrade-112030 is active
	I0915 07:54:28.987477   53205 main.go:141] libmachine: (stopped-upgrade-112030) Getting domain xml...
	I0915 07:54:28.988280   53205 main.go:141] libmachine: (stopped-upgrade-112030) Creating domain...
	I0915 07:54:30.282380   53205 main.go:141] libmachine: (stopped-upgrade-112030) Waiting to get IP...
	I0915 07:54:30.283113   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:30.283647   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:30.283734   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:30.283636   53239 retry.go:31] will retry after 238.646293ms: waiting for machine to come up
	I0915 07:54:30.524042   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:30.524717   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:30.524745   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:30.524665   53239 retry.go:31] will retry after 253.231975ms: waiting for machine to come up
	I0915 07:54:30.780267   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:30.780804   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:30.780831   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:30.780750   53239 retry.go:31] will retry after 380.943272ms: waiting for machine to come up
	I0915 07:54:31.163240   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:31.163716   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:31.163742   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:31.163687   53239 retry.go:31] will retry after 558.074606ms: waiting for machine to come up
	I0915 07:54:31.722807   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:31.723256   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:31.723282   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:31.723209   53239 retry.go:31] will retry after 655.819426ms: waiting for machine to come up
	I0915 07:54:32.381273   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:32.381856   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:32.381879   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:32.381780   53239 retry.go:31] will retry after 759.800455ms: waiting for machine to come up
	I0915 07:54:33.143298   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:33.143768   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:33.143785   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:33.143737   53239 retry.go:31] will retry after 748.789844ms: waiting for machine to come up
	I0915 07:54:32.699466   52215 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0915 07:54:32.699683   52215 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0915 07:54:31.172993   52982 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 07:54:31.187690   52982 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
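	
	The 496-byte file pushed above is the bridge CNI conflist minikube generates for the "kvm2 + crio, bridge" case. Its exact contents are not in the log; the sketch below is only an assumed, typical bridge + host-local conflist of that shape (subnet and flags are illustrative, not the literal file):
	
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	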
	I0915 07:54:31.217948   52982 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:54:31.218036   52982 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 07:54:31.218055   52982 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 07:54:31.233832   52982 system_pods.go:59] 6 kube-system pods found
	I0915 07:54:31.233872   52982 system_pods.go:61] "coredns-7c65d6cfc9-8ngzs" [4ec3a715-c19d-4fe0-88e5-d2b36fc56640] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 07:54:31.233883   52982 system_pods.go:61] "etcd-pause-742219" [cc120ac0-39e0-4dcd-93d1-6c6547740b5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0915 07:54:31.233892   52982 system_pods.go:61] "kube-apiserver-pause-742219" [40f4e787-0eb1-4f72-926d-8913fa27752f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 07:54:31.233908   52982 system_pods.go:61] "kube-controller-manager-pause-742219" [7deb3134-b61a-4b78-a29c-5b2d7e76cdce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 07:54:31.233917   52982 system_pods.go:61] "kube-proxy-8dd9x" [6a4779f5-5b4b-42d4-919a-025dcc1b52a5] Running
	I0915 07:54:31.233929   52982 system_pods.go:61] "kube-scheduler-pause-742219" [ca2cff60-799b-44db-8f00-99fbfd1b1caa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0915 07:54:31.233950   52982 system_pods.go:74] duration metric: took 15.974139ms to wait for pod list to return data ...
	I0915 07:54:31.233996   52982 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:54:31.239807   52982 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:54:31.239833   52982 node_conditions.go:123] node cpu capacity is 2
	I0915 07:54:31.239842   52982 node_conditions.go:105] duration metric: took 5.842012ms to run NodePressure ...
	I0915 07:54:31.239857   52982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 07:54:31.516949   52982 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0915 07:54:31.521312   52982 kubeadm.go:739] kubelet initialised
	I0915 07:54:31.521339   52982 kubeadm.go:740] duration metric: took 4.360349ms waiting for restarted kubelet to initialise ...
	I0915 07:54:31.521349   52982 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:54:31.526002   52982 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:33.536362   52982 pod_ready.go:93] pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:33.536385   52982 pod_ready.go:82] duration metric: took 2.010359533s for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:33.536394   52982 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:33.893647   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:33.894074   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:33.894097   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:33.894040   53239 retry.go:31] will retry after 902.669448ms: waiting for machine to come up
	I0915 07:54:34.798020   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:34.798455   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:34.798493   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:34.798415   53239 retry.go:31] will retry after 1.617204726s: waiting for machine to come up
	I0915 07:54:36.417677   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:36.418224   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:36.418243   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:36.418192   53239 retry.go:31] will retry after 2.105966246s: waiting for machine to come up
	I0915 07:54:38.526641   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:38.527180   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:38.527208   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:38.527136   53239 retry.go:31] will retry after 2.700403276s: waiting for machine to come up
	I0915 07:54:35.542830   52982 pod_ready.go:103] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:37.543810   52982 pod_ready.go:103] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:40.044074   52982 pod_ready.go:103] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:41.043416   52982 pod_ready.go:93] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:41.043452   52982 pod_ready.go:82] duration metric: took 7.507051455s for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:41.043465   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.050531   52982 pod_ready.go:103] pod "kube-apiserver-pause-742219" in "kube-system" namespace has status "Ready":"False"
	I0915 07:54:43.549261   52982 pod_ready.go:93] pod "kube-apiserver-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.549281   52982 pod_ready.go:82] duration metric: took 2.505809623s for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.549291   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.554425   52982 pod_ready.go:93] pod "kube-controller-manager-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.554447   52982 pod_ready.go:82] duration metric: took 5.147958ms for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.554458   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.559138   52982 pod_ready.go:93] pod "kube-proxy-8dd9x" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.559158   52982 pod_ready.go:82] duration metric: took 4.691838ms for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.559168   52982 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.565697   52982 pod_ready.go:93] pod "kube-scheduler-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.565736   52982 pod_ready.go:82] duration metric: took 6.544377ms for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.565746   52982 pod_ready.go:39] duration metric: took 12.044385721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
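	
	The readiness gate just completed can be approximated by hand with kubectl wait, using the same label selectors the log lists; the context name is assumed from the profile and the timeout mirrors the 4m budget above:
	
	    kubectl --context pause-742219 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	    kubectl --context pause-742219 -n kube-system wait pod \
	      -l component=kube-apiserver --for=condition=Ready --timeout=4m
	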
	I0915 07:54:43.565767   52982 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 07:54:43.578359   52982 ops.go:34] apiserver oom_adj: -16
	I0915 07:54:43.578378   52982 kubeadm.go:597] duration metric: took 26.561998369s to restartPrimaryControlPlane
	I0915 07:54:43.578388   52982 kubeadm.go:394] duration metric: took 26.660645722s to StartCluster
	I0915 07:54:43.578406   52982 settings.go:142] acquiring lock: {Name:mkf5235d72fa0db4ee272126c244284fe5de298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:54:43.578495   52982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 07:54:43.579329   52982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/kubeconfig: {Name:mk08ce8dc701ab8f3c73b1f6ae730da0fbf561bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:54:43.579538   52982 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:54:43.579606   52982 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 07:54:43.579795   52982 config.go:182] Loaded profile config "pause-742219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:54:43.582150   52982 out.go:177] * Verifying Kubernetes components...
	I0915 07:54:43.582150   52982 out.go:177] * Enabled addons: 
	I0915 07:54:41.228784   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:41.229305   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:41.229335   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:41.229251   53239 retry.go:31] will retry after 2.30208194s: waiting for machine to come up
	I0915 07:54:43.532679   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:43.533255   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | unable to find current IP address of domain stopped-upgrade-112030 in network mk-stopped-upgrade-112030
	I0915 07:54:43.533298   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | I0915 07:54:43.533210   53239 retry.go:31] will retry after 4.528304537s: waiting for machine to come up
	I0915 07:54:43.584015   52982 addons.go:510] duration metric: took 4.414093ms for enable addons: enabled=[]
	I0915 07:54:43.584045   52982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:54:43.748895   52982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:54:43.765270   52982 node_ready.go:35] waiting up to 6m0s for node "pause-742219" to be "Ready" ...
	I0915 07:54:43.768569   52982 node_ready.go:49] node "pause-742219" has status "Ready":"True"
	I0915 07:54:43.768591   52982 node_ready.go:38] duration metric: took 3.282171ms for node "pause-742219" to be "Ready" ...
	I0915 07:54:43.768600   52982 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:54:43.773460   52982 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.948167   52982 pod_ready.go:93] pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:43.948198   52982 pod_ready.go:82] duration metric: took 174.713586ms for pod "coredns-7c65d6cfc9-8ngzs" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:43.948212   52982 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.347157   52982 pod_ready.go:93] pod "etcd-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:44.347194   52982 pod_ready.go:82] duration metric: took 398.962807ms for pod "etcd-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.347207   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.747540   52982 pod_ready.go:93] pod "kube-apiserver-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:44.747568   52982 pod_ready.go:82] duration metric: took 400.35265ms for pod "kube-apiserver-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:44.747582   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.147517   52982 pod_ready.go:93] pod "kube-controller-manager-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:45.147541   52982 pod_ready.go:82] duration metric: took 399.951602ms for pod "kube-controller-manager-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.147553   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.546718   52982 pod_ready.go:93] pod "kube-proxy-8dd9x" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:45.546741   52982 pod_ready.go:82] duration metric: took 399.18215ms for pod "kube-proxy-8dd9x" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.546751   52982 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.947380   52982 pod_ready.go:93] pod "kube-scheduler-pause-742219" in "kube-system" namespace has status "Ready":"True"
	I0915 07:54:45.947403   52982 pod_ready.go:82] duration metric: took 400.635945ms for pod "kube-scheduler-pause-742219" in "kube-system" namespace to be "Ready" ...
	I0915 07:54:45.947414   52982 pod_ready.go:39] duration metric: took 2.178804286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:54:45.947430   52982 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:54:45.947487   52982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:54:45.961677   52982 api_server.go:72] duration metric: took 2.382110261s to wait for apiserver process to appear ...
	I0915 07:54:45.961703   52982 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:54:45.961719   52982 api_server.go:253] Checking apiserver healthz at https://192.168.72.43:8443/healthz ...
	I0915 07:54:45.967135   52982 api_server.go:279] https://192.168.72.43:8443/healthz returned 200:
	ok
	I0915 07:54:45.968088   52982 api_server.go:141] control plane version: v1.31.1
	I0915 07:54:45.968111   52982 api_server.go:131] duration metric: took 6.401949ms to wait for apiserver health ...
	I0915 07:54:45.968119   52982 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:54:46.150099   52982 system_pods.go:59] 6 kube-system pods found
	I0915 07:54:46.150129   52982 system_pods.go:61] "coredns-7c65d6cfc9-8ngzs" [4ec3a715-c19d-4fe0-88e5-d2b36fc56640] Running
	I0915 07:54:46.150136   52982 system_pods.go:61] "etcd-pause-742219" [cc120ac0-39e0-4dcd-93d1-6c6547740b5d] Running
	I0915 07:54:46.150140   52982 system_pods.go:61] "kube-apiserver-pause-742219" [40f4e787-0eb1-4f72-926d-8913fa27752f] Running
	I0915 07:54:46.150144   52982 system_pods.go:61] "kube-controller-manager-pause-742219" [7deb3134-b61a-4b78-a29c-5b2d7e76cdce] Running
	I0915 07:54:46.150147   52982 system_pods.go:61] "kube-proxy-8dd9x" [6a4779f5-5b4b-42d4-919a-025dcc1b52a5] Running
	I0915 07:54:46.150151   52982 system_pods.go:61] "kube-scheduler-pause-742219" [ca2cff60-799b-44db-8f00-99fbfd1b1caa] Running
	I0915 07:54:46.150158   52982 system_pods.go:74] duration metric: took 182.033731ms to wait for pod list to return data ...
	I0915 07:54:46.150167   52982 default_sa.go:34] waiting for default service account to be created ...
	I0915 07:54:46.348082   52982 default_sa.go:45] found service account: "default"
	I0915 07:54:46.348109   52982 default_sa.go:55] duration metric: took 197.935631ms for default service account to be created ...
	I0915 07:54:46.348119   52982 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 07:54:46.549298   52982 system_pods.go:86] 6 kube-system pods found
	I0915 07:54:46.549328   52982 system_pods.go:89] "coredns-7c65d6cfc9-8ngzs" [4ec3a715-c19d-4fe0-88e5-d2b36fc56640] Running
	I0915 07:54:46.549336   52982 system_pods.go:89] "etcd-pause-742219" [cc120ac0-39e0-4dcd-93d1-6c6547740b5d] Running
	I0915 07:54:46.549345   52982 system_pods.go:89] "kube-apiserver-pause-742219" [40f4e787-0eb1-4f72-926d-8913fa27752f] Running
	I0915 07:54:46.549350   52982 system_pods.go:89] "kube-controller-manager-pause-742219" [7deb3134-b61a-4b78-a29c-5b2d7e76cdce] Running
	I0915 07:54:46.549355   52982 system_pods.go:89] "kube-proxy-8dd9x" [6a4779f5-5b4b-42d4-919a-025dcc1b52a5] Running
	I0915 07:54:46.549360   52982 system_pods.go:89] "kube-scheduler-pause-742219" [ca2cff60-799b-44db-8f00-99fbfd1b1caa] Running
	I0915 07:54:46.549368   52982 system_pods.go:126] duration metric: took 201.243199ms to wait for k8s-apps to be running ...
	I0915 07:54:46.549377   52982 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:54:46.549427   52982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:54:46.563995   52982 system_svc.go:56] duration metric: took 14.612396ms WaitForService to wait for kubelet
	I0915 07:54:46.564025   52982 kubeadm.go:582] duration metric: took 2.984463974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:54:46.564046   52982 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:54:46.748043   52982 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 07:54:46.748085   52982 node_conditions.go:123] node cpu capacity is 2
	I0915 07:54:46.748098   52982 node_conditions.go:105] duration metric: took 184.045523ms to run NodePressure ...
	I0915 07:54:46.748109   52982 start.go:241] waiting for startup goroutines ...
	I0915 07:54:46.748118   52982 start.go:246] waiting for cluster config update ...
	I0915 07:54:46.748128   52982 start.go:255] writing updated cluster config ...
	I0915 07:54:46.748468   52982 ssh_runner.go:195] Run: rm -f paused
	I0915 07:54:46.794117   52982 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 07:54:46.796205   52982 out.go:177] * Done! kubectl is now configured to use "pause-742219" cluster and "default" namespace by default
	I0915 07:54:48.066173   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.066663   53205 main.go:141] libmachine: (stopped-upgrade-112030) Found IP for machine: 192.168.50.194
	I0915 07:54:48.066701   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has current primary IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.066709   53205 main.go:141] libmachine: (stopped-upgrade-112030) Reserving static IP address...
	I0915 07:54:48.067240   53205 main.go:141] libmachine: (stopped-upgrade-112030) Reserved static IP address: 192.168.50.194
	I0915 07:54:48.067271   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "stopped-upgrade-112030", mac: "52:54:00:b9:18:f1", ip: "192.168.50.194"} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.067282   53205 main.go:141] libmachine: (stopped-upgrade-112030) Waiting for SSH to be available...
	I0915 07:54:48.067349   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | skip adding static IP to network mk-stopped-upgrade-112030 - found existing host DHCP lease matching {name: "stopped-upgrade-112030", mac: "52:54:00:b9:18:f1", ip: "192.168.50.194"}
	I0915 07:54:48.067374   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | Getting to WaitForSSH function...
	I0915 07:54:48.069716   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.070114   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.070142   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.070254   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | Using SSH client type: external
	I0915 07:54:48.070273   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | Using SSH private key: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/stopped-upgrade-112030/id_rsa (-rw-------)
	I0915 07:54:48.070338   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19644-6166/.minikube/machines/stopped-upgrade-112030/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0915 07:54:48.070362   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | About to run SSH command:
	I0915 07:54:48.070374   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | exit 0
	I0915 07:54:48.166331   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | SSH cmd err, output: <nil>: 
	I0915 07:54:48.166683   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetConfigRaw
	I0915 07:54:48.167418   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetIP
	I0915 07:54:48.170195   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.170624   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.170661   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.170915   53205 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/stopped-upgrade-112030/config.json ...
	I0915 07:54:48.171137   53205 machine.go:93] provisionDockerMachine start ...
	I0915 07:54:48.171160   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .DriverName
	I0915 07:54:48.171400   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHHostname
	I0915 07:54:48.173640   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.174019   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.174046   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.174190   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHPort
	I0915 07:54:48.174369   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHKeyPath
	I0915 07:54:48.174512   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHKeyPath
	I0915 07:54:48.174673   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHUsername
	I0915 07:54:48.174870   53205 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:48.175106   53205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0915 07:54:48.175124   53205 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:54:48.305724   53205 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0915 07:54:48.305770   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetMachineName
	I0915 07:54:48.306023   53205 buildroot.go:166] provisioning hostname "stopped-upgrade-112030"
	I0915 07:54:48.306046   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetMachineName
	I0915 07:54:48.306200   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHHostname
	I0915 07:54:48.309460   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.309784   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.309827   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.309996   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHPort
	I0915 07:54:48.310182   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHKeyPath
	I0915 07:54:48.310345   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHKeyPath
	I0915 07:54:48.310488   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHUsername
	I0915 07:54:48.310624   53205 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:48.310808   53205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0915 07:54:48.310823   53205 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-112030 && echo "stopped-upgrade-112030" | sudo tee /etc/hostname
	I0915 07:54:48.460225   53205 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-112030
	
	I0915 07:54:48.460251   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHHostname
	I0915 07:54:48.463377   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.463743   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.463772   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.463917   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHPort
	I0915 07:54:48.464138   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHKeyPath
	I0915 07:54:48.464318   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHKeyPath
	I0915 07:54:48.464489   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHUsername
	I0915 07:54:48.464705   53205 main.go:141] libmachine: Using SSH client type: native
	I0915 07:54:48.464900   53205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0915 07:54:48.464918   53205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-112030' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-112030/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-112030' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:54:48.609090   53205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:54:48.609116   53205 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19644-6166/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-6166/.minikube}
	I0915 07:54:48.609137   53205 buildroot.go:174] setting up certificates
	I0915 07:54:48.609149   53205 provision.go:84] configureAuth start
	I0915 07:54:48.609160   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetMachineName
	I0915 07:54:48.609398   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetIP
	I0915 07:54:48.612393   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.612724   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.612749   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.612879   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHHostname
	I0915 07:54:48.615089   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.615403   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.615430   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.615560   53205 provision.go:143] copyHostCerts
	I0915 07:54:48.615614   53205 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem, removing ...
	I0915 07:54:48.615622   53205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem
	I0915 07:54:48.615699   53205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/key.pem (1679 bytes)
	I0915 07:54:48.615812   53205 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem, removing ...
	I0915 07:54:48.615822   53205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem
	I0915 07:54:48.615856   53205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/ca.pem (1082 bytes)
	I0915 07:54:48.615949   53205 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem, removing ...
	I0915 07:54:48.615967   53205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem
	I0915 07:54:48.616004   53205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-6166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-6166/.minikube/cert.pem (1123 bytes)
	I0915 07:54:48.616084   53205 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-112030 san=[127.0.0.1 192.168.50.194 localhost minikube stopped-upgrade-112030]
	I0915 07:54:48.705452   53205 provision.go:177] copyRemoteCerts
	I0915 07:54:48.705519   53205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:54:48.705546   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHHostname
	I0915 07:54:48.708390   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.708777   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:18:f1", ip: ""} in network mk-stopped-upgrade-112030: {Iface:virbr2 ExpiryTime:2024-09-15 08:54:38 +0000 UTC Type:0 Mac:52:54:00:b9:18:f1 Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:stopped-upgrade-112030 Clientid:01:52:54:00:b9:18:f1}
	I0915 07:54:48.708806   53205 main.go:141] libmachine: (stopped-upgrade-112030) DBG | domain stopped-upgrade-112030 has defined IP address 192.168.50.194 and MAC address 52:54:00:b9:18:f1 in network mk-stopped-upgrade-112030
	I0915 07:54:48.709029   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHPort
	I0915 07:54:48.709185   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHKeyPath
	I0915 07:54:48.709354   53205 main.go:141] libmachine: (stopped-upgrade-112030) Calling .GetSSHUsername
	I0915 07:54:48.709517   53205 sshutil.go:53] new ssh client: &{IP:192.168.50.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/stopped-upgrade-112030/id_rsa Username:docker}
	I0915 07:54:48.802924   53205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:54:48.826293   53205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-6166/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	
	
	==> CRI-O <==
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.392558711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84de3eaf-571d-429c-b9f0-adc2a8548297 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.392824051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84de3eaf-571d-429c-b9f0-adc2a8548297 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.432363842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=814d14ae-d463-4928-a485-e62893f3ecc1 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.432437623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=814d14ae-d463-4928-a485-e62893f3ecc1 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.433236641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48d7c8e3-2f11-46af-935a-bf1e53dfa6f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.433636614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386889433616152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48d7c8e3-2f11-46af-935a-bf1e53dfa6f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.434119812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=494bae33-b3cd-402f-bfb6-60d4e066fee9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.434171120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=494bae33-b3cd-402f-bfb6-60d4e066fee9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.434403474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=494bae33-b3cd-402f-bfb6-60d4e066fee9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.490320688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19300a83-5ad1-4e98-8e8a-8fc2303fc603 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.490401189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19300a83-5ad1-4e98-8e8a-8fc2303fc603 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.491695720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b61359e-b4dd-4ab0-9298-abd907025bda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.492177952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386889492152434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b61359e-b4dd-4ab0-9298-abd907025bda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.492967056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c818a39-4163-48f5-9017-3db1b36157ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.493031822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c818a39-4163-48f5-9017-3db1b36157ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.493268261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c818a39-4163-48f5-9017-3db1b36157ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.537787028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f507a84-7b2f-474d-a647-4859bfeac461 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.537988788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f507a84-7b2f-474d-a647-4859bfeac461 name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.539684977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f147d15-91cf-4c73-a31b-6a66e1bf3cf2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.540292630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386889540259509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f147d15-91cf-4c73-a31b-6a66e1bf3cf2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.540964143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62d24f04-2c4f-4ff0-a5a3-13cd3fe5e81d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.541118606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62d24f04-2c4f-4ff0-a5a3-13cd3fe5e81d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.541503679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726386870182026991,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463,PodSandboxId:de97d3f4e9eede7dd5eda130b4661fa5a2f46f7de97d60efce9ffba779d17cb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726386870180628315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695,PodSandboxId:0b73dce7f0b583b8f34cc7db8a13594238905c651bcb19f5e2b329dfaa5015ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726386869729634759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5,PodSandboxId:f9786ebfdec93db96e99e4b720f9d5dda1a72ae4d0fdcc07aa02c441f1de0ec8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726386865543956155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
29e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e,PodSandboxId:7524801c19869d1870ad29e2dc6e569213e3dbd38c95df704df060db2490a0f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726386865488479537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c,PodSandboxId:1ad16f05d91bb7c96b65b58d71149cab29c6fbbc0ff4eb2ef43ec0bc78e3533f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726386865273503295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85,PodSandboxId:45104aced3b148d8fab875961a523877f0ca40cf8499095c0fdbf858847f6048,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726386856633697442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8ngzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec3a715-c19d-4fe0-88e5-d2b36fc56640,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b,PodSandboxId:f5d18c6d14fce65ad0173db88f6dee2670b4f6d378e53885e79a032e0011a339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726386854158384470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-8dd9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a4779f5-5b4b-42d4-919a-025dcc1b52a5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186,PodSandboxId:00856cdaaea9ff6fb0f0d6d9b7dfbdd2361625f8003052614f5efd36ff0cec8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726386854297825587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4502dde6e4b482035021a9efb10a323,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280,PodSandboxId:c7d1cc7d31818ca0478b6677e099d47c97c3a741a700b5736556ee4f6cc1b903,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726386854262405301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-742219,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ac30683a8c8a05a895d1c63585eb16e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670,PodSandboxId:f06672ce3f73a51dcdb5bce707bdfa2fb1e8f07551c6383c35b260e20f1a24c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726386854218636780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 829e379fddfad6d8892a76796f0aafae,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673,PodSandboxId:8fb079d276072a6154176ab5d2df4c559987fb943fd42dfc2734560e2b50b584,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726386853992767093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-742219,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1585d66306ff8a19870a02080fd586ce,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62d24f04-2c4f-4ff0-a5a3-13cd3fe5e81d name=/runtime.v1.RuntimeService/ListContainers
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.551399199Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=ba074bb0-130c-4d97-9b33-ea91c628b8de name=/runtime.v1.RuntimeService/Version
	Sep 15 07:54:49 pause-742219 crio[2844]: time="2024-09-15 07:54:49.551484099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba074bb0-130c-4d97-9b33-ea91c628b8de name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84d479066af5a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   45104aced3b14       coredns-7c65d6cfc9-8ngzs
	a0b6ffb5f9124       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Running             kube-proxy                2                   de97d3f4e9eed       kube-proxy-8dd9x
	16900bb163bf9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   19 seconds ago      Running             kube-controller-manager   2                   0b73dce7f0b58       kube-controller-manager-pause-742219
	26b2e09ba379e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   24 seconds ago      Running             kube-scheduler            2                   f9786ebfdec93       kube-scheduler-pause-742219
	5d950374b7966       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago      Running             etcd                      2                   7524801c19869       etcd-pause-742219
	b3d2112c6f8ec       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   24 seconds ago      Running             kube-apiserver            2                   1ad16f05d91bb       kube-apiserver-pause-742219
	dd16e21e72273       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   32 seconds ago      Exited              coredns                   1                   45104aced3b14       coredns-7c65d6cfc9-8ngzs
	97362824ba1f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   35 seconds ago      Exited              kube-apiserver            1                   00856cdaaea9f       kube-apiserver-pause-742219
	8a92741efd4a1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   35 seconds ago      Exited              etcd                      1                   c7d1cc7d31818       etcd-pause-742219
	cdb50f4eaa6a0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   35 seconds ago      Exited              kube-scheduler            1                   f06672ce3f73a       kube-scheduler-pause-742219
	e95202c6eff0b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   35 seconds ago      Exited              kube-proxy                1                   f5d18c6d14fce       kube-proxy-8dd9x
	676cee692affa       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   35 seconds ago      Exited              kube-controller-manager   1                   8fb079d276072       kube-controller-manager-pause-742219
	
	
	==> coredns [84d479066af5af926a4aeb0401dc7e5056c4fd69c1d662a69a97906bd9919f60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32916 - 16060 "HINFO IN 7423366553452225556.3843291169860854247. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01448077s
	
	
	==> coredns [dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59113 - 45040 "HINFO IN 1667818845255481148.1975024935033198385. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014178542s
	
	
	==> describe nodes <==
	Name:               pause-742219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-742219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=pause-742219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_53_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:53:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-742219
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:54:29 +0000   Sun, 15 Sep 2024 07:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.43
	  Hostname:    pause-742219
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 934c3cc52e394b5689f4b55093133e57
	  System UUID:                934c3cc5-2e39-4b56-89f4-b55093133e57
	  Boot ID:                    377907be-253e-4a79-b331-ca93481f13ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8ngzs                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-pause-742219                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         93s
	  kube-system                 kube-apiserver-pause-742219             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-742219    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-8dd9x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-pause-742219             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 87s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s (x2 over 93s)  kubelet          Node pause-742219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s (x2 over 93s)  kubelet          Node pause-742219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s (x2 over 93s)  kubelet          Node pause-742219 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeReady                92s                kubelet          Node pause-742219 status is now: NodeReady
	  Normal  RegisteredNode           89s                node-controller  Node pause-742219 event: Registered Node pause-742219 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x2 over 20s)  kubelet          Node pause-742219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x2 over 20s)  kubelet          Node pause-742219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x2 over 20s)  kubelet          Node pause-742219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-742219 event: Registered Node pause-742219 in Controller
	
	
	==> dmesg <==
	[  +0.065324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063114] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.212686] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.152066] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.306503] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Sep15 07:53] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +0.068941] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.546990] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.628635] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.960296] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.096462] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.788805] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +0.609036] kauditd_printk_skb: 43 callbacks suppressed
	[ +10.905697] kauditd_printk_skb: 64 callbacks suppressed
	[Sep15 07:54] systemd-fstab-generator[2232]: Ignoring "noauto" option for root device
	[  +0.136657] systemd-fstab-generator[2244]: Ignoring "noauto" option for root device
	[  +0.165105] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[  +0.137678] systemd-fstab-generator[2270]: Ignoring "noauto" option for root device
	[  +0.946976] systemd-fstab-generator[2644]: Ignoring "noauto" option for root device
	[  +1.144677] systemd-fstab-generator[3069]: Ignoring "noauto" option for root device
	[  +9.620877] kauditd_printk_skb: 248 callbacks suppressed
	[  +3.164514] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +1.852987] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.287374] systemd-fstab-generator[4020]: Ignoring "noauto" option for root device
	[  +0.100875] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [5d950374b796693fe04c61165104e91bbf9c5c6678cd45552d5a65bcdcea002e] <==
	{"level":"info","ts":"2024-09-15T07:54:25.790994Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-15T07:54:25.790967Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"19a433c09434770","initial-advertise-peer-urls":["https://192.168.72.43:2380"],"listen-peer-urls":["https://192.168.72.43:2380"],"advertise-client-urls":["https://192.168.72.43:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.43:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T07:54:25.791056Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T07:54:25.791170Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.43:2380"}
	{"level":"info","ts":"2024-09-15T07:54:25.791264Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.43:2380"}
	{"level":"info","ts":"2024-09-15T07:54:25.785434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 switched to configuration voters=(115478665583871856)"}
	{"level":"info","ts":"2024-09-15T07:54:25.791550Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ed568b97e66db48","local-member-id":"19a433c09434770","added-peer-id":"19a433c09434770","added-peer-peer-urls":["https://192.168.72.43:2380"]}
	{"level":"info","ts":"2024-09-15T07:54:25.791753Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ed568b97e66db48","local-member-id":"19a433c09434770","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:54:25.791914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T07:54:27.057801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-15T07:54:27.057926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T07:54:27.057963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 received MsgPreVoteResp from 19a433c09434770 at term 2"}
	{"level":"info","ts":"2024-09-15T07:54:27.057979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.057985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 received MsgVoteResp from 19a433c09434770 at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.057993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19a433c09434770 became leader at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.058000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 19a433c09434770 elected leader 19a433c09434770 at term 3"}
	{"level":"info","ts":"2024-09-15T07:54:27.061146Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"19a433c09434770","local-member-attributes":"{Name:pause-742219 ClientURLs:[https://192.168.72.43:2379]}","request-path":"/0/members/19a433c09434770/attributes","cluster-id":"ed568b97e66db48","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T07:54:27.061149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:54:27.061259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T07:54:27.061738Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T07:54:27.061755Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T07:54:27.062557Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:54:27.063316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T07:54:27.071313Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T07:54:27.072223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.43:2379"}
	
	
	==> etcd [8a92741efd4a1309a3a9b55dab15f2dedda2a36961e396f4cfa0fd564eacc280] <==
	
	
	==> kernel <==
	 07:54:49 up 2 min,  0 users,  load average: 0.70, 0.26, 0.09
	Linux pause-742219 5.10.207 #1 SMP Sun Sep 15 04:48:27 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [97362824ba1f0684ad3030bd3e7876e33bb5506258c1f1057f0286118a5d5186] <==
	
	
	==> kube-apiserver [b3d2112c6f8ec30f31a946723ccc1fead00e085f2d5834dc68efe59fcbe1ac9c] <==
	I0915 07:54:29.235989       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 07:54:29.238236       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:54:29.251552       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:54:29.251605       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:54:29.254270       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:54:29.254307       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:54:29.254395       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:54:29.256593       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:54:29.256637       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:54:29.256644       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:54:29.256649       1 cache.go:39] Caches are synced for autoregister controller
	I0915 07:54:29.285646       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:54:29.297992       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 07:54:29.306426       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:54:29.306471       1 policy_source.go:224] refreshing policies
	I0915 07:54:29.334671       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0915 07:54:29.392891       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:54:30.187682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 07:54:31.348268       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 07:54:31.367704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 07:54:31.431118       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 07:54:31.472565       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 07:54:31.485479       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 07:54:33.274674       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 07:54:33.378460       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [16900bb163bf9f90e59e4a54efab79083eef6ddec6cb8ad30253c1550eedb695] <==
	I0915 07:54:32.967420       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0915 07:54:32.967470       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0915 07:54:32.967504       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0915 07:54:32.967953       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0915 07:54:32.968023       1 shared_informer.go:320] Caches are synced for daemon sets
	I0915 07:54:32.967969       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-742219"
	I0915 07:54:32.968103       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0915 07:54:32.973734       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0915 07:54:32.975760       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0915 07:54:32.978718       1 shared_informer.go:320] Caches are synced for attach detach
	I0915 07:54:32.981410       1 shared_informer.go:320] Caches are synced for expand
	I0915 07:54:32.984917       1 shared_informer.go:320] Caches are synced for endpoint
	I0915 07:54:33.004362       1 shared_informer.go:320] Caches are synced for persistent volume
	I0915 07:54:33.037679       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0915 07:54:33.042059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.383576ms"
	I0915 07:54:33.042618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.84µs"
	I0915 07:54:33.068227       1 shared_informer.go:320] Caches are synced for crt configmap
	I0915 07:54:33.166849       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 07:54:33.168354       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0915 07:54:33.177124       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 07:54:33.340829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="11.32744ms"
	I0915 07:54:33.340986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.983µs"
	I0915 07:54:33.609960       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 07:54:33.621645       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 07:54:33.621706       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673] <==
	
	
	==> kube-proxy [a0b6ffb5f912454523811e2320cdbb7d77288bc64b1c1c80657f6a6fb9413463] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 07:54:30.534163       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 07:54:30.552147       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.43"]
	E0915 07:54:30.552239       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:54:30.628982       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 07:54:30.629089       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 07:54:30.629140       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:54:30.636208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:54:30.636737       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:54:30.638930       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:54:30.641250       1 config.go:199] "Starting service config controller"
	I0915 07:54:30.641366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:54:30.641485       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:54:30.641582       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:54:30.642237       1 config.go:328] "Starting node config controller"
	I0915 07:54:30.642309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:54:30.742583       1 shared_informer.go:320] Caches are synced for node config
	I0915 07:54:30.742944       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:54:30.742986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b] <==
	
	
	==> kube-scheduler [26b2e09ba379e801f73846d53108adbb0bf5d59c743d0902c03ffd8c545e15b5] <==
	I0915 07:54:26.144941       1 serving.go:386] Generated self-signed cert in-memory
	W0915 07:54:29.183320       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 07:54:29.183425       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 07:54:29.183436       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 07:54:29.183534       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 07:54:29.289701       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 07:54:29.289745       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:54:29.296108       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 07:54:29.296273       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 07:54:29.296313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 07:54:29.296747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 07:54:29.397600       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cdb50f4eaa6a0ec69d898eb7021e96cef92373f04aaa72d80d142fe4672cb670] <==
	
	
	==> kubelet <==
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397415    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1585d66306ff8a19870a02080fd586ce-k8s-certs\") pod \"kube-controller-manager-pause-742219\" (UID: \"1585d66306ff8a19870a02080fd586ce\") " pod="kube-system/kube-controller-manager-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397529    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1585d66306ff8a19870a02080fd586ce-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-742219\" (UID: \"1585d66306ff8a19870a02080fd586ce\") " pod="kube-system/kube-controller-manager-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397657    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/829e379fddfad6d8892a76796f0aafae-kubeconfig\") pod \"kube-scheduler-pause-742219\" (UID: \"829e379fddfad6d8892a76796f0aafae\") " pod="kube-system/kube-scheduler-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.397771    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3ac30683a8c8a05a895d1c63585eb16e-etcd-certs\") pod \"etcd-pause-742219\" (UID: \"3ac30683a8c8a05a895d1c63585eb16e\") " pod="kube-system/etcd-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.414317    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-742219\" already exists" pod="kube-system/kube-scheduler-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.414717    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-742219\" already exists" pod="kube-system/kube-apiserver-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.414885    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-742219\" already exists" pod="kube-system/etcd-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: E0915 07:54:29.415346    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-742219\" already exists" pod="kube-system/kube-controller-manager-pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.418461    3724 kubelet_node_status.go:111] "Node was previously registered" node="pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.418550    3724 kubelet_node_status.go:75] "Successfully registered node" node="pause-742219"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.418588    3724 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.420090    3724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.715556    3724 scope.go:117] "RemoveContainer" containerID="676cee692affaa138ffa137ab36407837b68184e0efe439e83452eeef8382673"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.815754    3724 apiserver.go:52] "Watching apiserver"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.862978    3724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.900983    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4779f5-5b4b-42d4-919a-025dcc1b52a5-lib-modules\") pod \"kube-proxy-8dd9x\" (UID: \"6a4779f5-5b4b-42d4-919a-025dcc1b52a5\") " pod="kube-system/kube-proxy-8dd9x"
	Sep 15 07:54:29 pause-742219 kubelet[3724]: I0915 07:54:29.901633    3724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4779f5-5b4b-42d4-919a-025dcc1b52a5-xtables-lock\") pod \"kube-proxy-8dd9x\" (UID: \"6a4779f5-5b4b-42d4-919a-025dcc1b52a5\") " pod="kube-system/kube-proxy-8dd9x"
	Sep 15 07:54:30 pause-742219 kubelet[3724]: I0915 07:54:30.121246    3724 scope.go:117] "RemoveContainer" containerID="dd16e21e722732c280126116ad08a14c909ff2f095a7470ece44beff5fc3de85"
	Sep 15 07:54:30 pause-742219 kubelet[3724]: I0915 07:54:30.122354    3724 scope.go:117] "RemoveContainer" containerID="e95202c6eff0b866398a47a1903d0ab469b9df969ff1c017288da5a924d2869b"
	Sep 15 07:54:30 pause-742219 kubelet[3724]: E0915 07:54:30.162147    3724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-742219\" already exists" pod="kube-system/kube-apiserver-pause-742219"
	Sep 15 07:54:33 pause-742219 kubelet[3724]: I0915 07:54:33.306943    3724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 15 07:54:39 pause-742219 kubelet[3724]: E0915 07:54:39.098846    3724 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386879098534501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:54:39 pause-742219 kubelet[3724]: E0915 07:54:39.099317    3724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386879098534501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:54:49 pause-742219 kubelet[3724]: E0915 07:54:49.101994    3724 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386889101399753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:54:49 pause-742219 kubelet[3724]: E0915 07:54:49.102053    3724 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726386889101399753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-742219 -n pause-742219
helpers_test.go:261: (dbg) Run:  kubectl --context pause-742219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (45.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (7200.057s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-393758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9pb8r" [111aa79f-47b0-432d-aa20-7c2dbe1ff11f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 08:29:39.361645   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/no-preload-778087/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (34m0s)
		TestNetworkPlugins/group/calico (51s)
		TestNetworkPlugins/group/calico/Start (51s)
		TestNetworkPlugins/group/custom-flannel (27s)
		TestNetworkPlugins/group/custom-flannel/Start (27s)
		TestNetworkPlugins/group/kindnet (1m27s)
		TestNetworkPlugins/group/kindnet/NetCatPod (4s)
		TestStartStop (34m25s)
		TestStartStop/group/default-k8s-diff-port (16m23s)
		TestStartStop/group/default-k8s-diff-port/serial (16m23s)
		TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (1m47s)

                                                
                                                
goroutine 7890 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 33 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0006a6d00, 0xc000887bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc00090a4b0, {0x4cf86a0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x4db6de0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000891040)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000891040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001c7800)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 3419 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00014a820)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00066f520)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00066f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00066f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00066f520, 0xc001742000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3406
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 96 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 95
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 52 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0xff
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 51
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x167

goroutine 127 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007ee440, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 132
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 126 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 132
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 7811 [IO wait]:
internal/poll.runtime_pollWait(0x7fab24fd0f78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0018c97a0?, 0xc001adea3b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0018c97a0, {0xc001adea3b, 0x5c5, 0x5c5})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ab2a0, {0xc001adea3b?, 0x4?, 0x23b?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001c282a0, {0x37666e0, 0xc000908930})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3766860, 0xc001c282a0}, {0x37666e0, 0xc000908930}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ab2a0?, {0x3766860, 0xc001c282a0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ab2a0, {0x3766860, 0xc001c282a0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3766860, 0xc001c282a0}, {0x3766760, 0xc0006ab2a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001e13180?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 7810
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 1273 [chan receive, 100 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00266af40, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1297
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 7650 [IO wait]:
internal/poll.runtime_pollWait(0x7fab24fd0d68, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001336c00?, 0xc0012f8000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001336c00, {0xc0012f8000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001336c00, {0xc0012f8000?, 0x10?, 0xc0016218a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000ba05b0, {0xc0012f8000?, 0xc0012f8005?, 0x22?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001ba2c60, {0xc0012f8000?, 0x0?, 0xc001ba2c60?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0001c1438, {0x3768660, 0xc001ba2c60})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001c1188, {0x7fab3c0c5f80, 0xc0015d2528}, 0xc001621a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0001c1188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0001c1188, {0xc00143d000, 0x1000, 0x7?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001393680, {0xc0007b6900, 0x9, 0xc000105340?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc001393680}, {0xc0007b6900, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0007b6900, 0x9, 0xa126fe?}, {0x3766900?, 0xc001393680?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0007b68c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001621fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001818600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 7601
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

goroutine 94 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007ee410, 0x2d)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00149cd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007ee440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b88170, {0x3767e60, 0xc0012d82d0}, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b88170, 0x3b9aca00, 0x0, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 127
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 95 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc0006be0e0}, 0xc000a70f50, 0xc000a70f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc0006be0e0}, 0x40?, 0xc000a70f50, 0xc000a70f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc0006be0e0?}, 0xc00066ed00?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5a1a45?, 0xc000002000?, 0xc00130c540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 127
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 3557 [chan receive]:
testing.(*T).Run(0xc0006a6ea0, {0x2925a50?, 0x375e220?}, 0xc001302db0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006a6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc0006a6ea0, 0xc00060c600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3406
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 7812 [IO wait]:
internal/poll.runtime_pollWait(0x7fab3e560c88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0018c9860?, 0xc00222853f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0018c9860, {0xc00222853f, 0x7ac1, 0x7ac1})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ab2c8, {0xc00222853f?, 0xc00009bda8?, 0x7e94?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001c282d0, {0x37666e0, 0xc000908948})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3766860, 0xc001c282d0}, {0x37666e0, 0xc000908948}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ab2c8?, {0x3766860, 0xc001c282d0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ab2c8, {0x3766860, 0xc001c282d0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3766860, 0xc001c282d0}, {0x3766760, 0xc0006ab2c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001c281e0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 7810
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 7504 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc000479420}, {0x3782400, 0xc001a11e20}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc000444230?}, 0x3b9aca00, 0xc0014c1e10?, 0x1, 0xc0014c1c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc000444230}, 0xc00189e680, {0xc0018521e0, 0x1c}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f170, 0xc000444230}, 0xc00189e680, {0xc0018521e0, 0x1c}, {0x294f08e?, 0xc0015c9f60?}, {0x55b653?, 0x4b1aaf?}, {0xc000958100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00189e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00189e680, 0xc00060c980)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4259
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4425 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4424
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 7487 [IO wait]:
internal/poll.runtime_pollWait(0x7fab3e5610a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0017f8200?, 0xc0013c6000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017f8200, {0xc0013c6000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0017f8200, {0xc0013c6000?, 0x10?, 0xc0012e88a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000ba05e8, {0xc0013c6000?, 0xc0013c605e?, 0x6f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0019b05a0, {0xc0013c6000?, 0x0?, 0xc0019b05a0?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0001c02b8, {0x3768660, 0xc0019b05a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001c0008, {0x7fab3c0c5f80, 0xc0015d24f8}, 0xc0012e8a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0001c0008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0001c0008, {0xc0013d9000, 0x1000, 0xc0012ee540?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001d79560, {0xc0007b6740, 0x9, 0x4cb2c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc001d79560}, {0xc0007b6740, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0007b6740, 0x9, 0x47b965?}, {0x3766900?, 0xc001d79560?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0007b6700)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0012e8fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00082f680)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 7486
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

goroutine 3420 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00014a820)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00066f6c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00066f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00066f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00066f6c0, 0xc001742080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3406
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 1651 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc001590a80, 0xc0006bea10)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1288
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 1272 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1297
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1374 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015a1500, 0xc0006bfdc0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1373
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 7868 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc000562850}, {0x3782400, 0xc002275040}, 0x1, 0x0, 0xc00230fbe0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc00011e1c0?}, 0x3b9aca00, 0xc002173dd8?, 0x1, 0xc002173be0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc00011e1c0}, 0xc00189f520, {0xc0017fe0c0, 0xe}, {0x2929a58, 0x7}, {0x2930d01, 0xa}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc00189f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc00189f520, 0xc001637740)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3422
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 7855 [select]:
golang.org/x/net/http2.(*ClientConn).Ping(0xc001818600, {0x378f170, 0xc000562c40})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:3061 +0x2c5
golang.org/x/net/http2.(*ClientConn).healthCheck(0xc001818600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:876 +0xb1
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 7846 [IO wait]:
internal/poll.runtime_pollWait(0x7fab3e560fa0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001742100?, 0xc001e48000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001742100, {0xc001e48000, 0x3500, 0x3500})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001742100, {0xc001e48000?, 0x10?, 0xc000a768a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000908b48, {0xc001e48000?, 0xc001e48005?, 0x7fab24fd5640?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0019b05e8, {0xc001e48000?, 0x0?, 0xc0019b05e8?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc00162e638, {0x3768660, 0xc0019b05e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00162e388, {0x7fab3c0c5f80, 0xc001c0dc68}, 0xc000a76a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00162e388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc00162e388, {0xc001dbb000, 0x1000, 0xc00155bc00?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc00131b200, {0xc0007b7000, 0x9, 0x4cb2c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc00131b200}, {0xc0007b7000, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0007b7000, 0x9, 0x47b965?}, {0x3766900?, 0xc00131b200?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0007b6fc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000a76fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001851e00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 7845
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

goroutine 7636 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 7619
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 7822 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 7821
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 7720 [IO wait]:
internal/poll.runtime_pollWait(0x7fab3e560e98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001d83b60?, 0xc0021272c8?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d83b60, {0xc0021272c8, 0x4d38, 0x4d38})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000ba0aa8, {0xc0021272c8?, 0xc000206180?, 0xfe2b?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001302ed0, {0x37666e0, 0xc0009088d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3766860, 0xc001302ed0}, {0x37666e0, 0xc0009088d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000ba0aa8?, {0x3766860, 0xc001302ed0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000ba0aa8, {0x3766860, 0xc001302ed0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3766860, 0xc001302ed0}, {0x3766760, 0xc000ba0aa8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 7718
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 1132 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x7fab3e5614c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00050f800?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00050f800)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc00050f800)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00086ce40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00086ce40)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc0014b41e0, {0x3781d70, 0xc00086ce40})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc0014b41e0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00139fd40?, 0xc00139fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1129
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 3421 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00014a820)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00066f860)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00066f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00066f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00066f860, 0xc001742180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3406
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4527 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001978790, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014f7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019787c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bfe040, {0x3767e60, 0xc001c28030}, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bfe040, 0x3b9aca00, 0x0, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4566
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 1533 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc001e66d80, 0xc001e13340)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1532
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3201 [chan receive, 34 minutes]:
testing.(*T).Run(0xc0015fa4e0, {0x2925a4b?, 0x55b79c?}, 0xc001632bb8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0015fa4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0015fa4e0, 0x3410a38)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3406 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0006a7040, 0xc001632bb8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 3201
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 7842 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001d28c10, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0015c7580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d28c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00089d670, {0x3767e60, 0xc001d63ec0}, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00089d670, 0x3b9aca00, 0x0, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7823
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 7813 [select]:
os/exec.(*Cmd).watchCtx(0xc001936900, 0xc000065570)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 7810
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3391 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00139eea0, 0x3410c78)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 3249
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 7721 [select]:
os/exec.(*Cmd).watchCtx(0xc00154e600, 0xc001b16af0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 7718
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 4259 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00189e1a0, {0x2951f4f?, 0x0?}, 0xc00060c980)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00189e1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00189e1a0, 0xc0014aa200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3410
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 1767 [select, 98 minutes]:
net/http.(*persistConn).readLoop(0xc001386d80)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1759
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 7843 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc0006be0e0}, 0xc00133e750, 0xc00133e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc0006be0e0}, 0x0?, 0xc00133e750, 0xc00133e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc0006be0e0?}, 0xa08ff6?, 0xc001851c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001851c80?, 0x5a1aa4?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7823
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 1347 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00266af10, 0x28)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001620d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00266af40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000481b70, {0x3767e60, 0xc0015f8c00}, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000481b70, 0x3b9aca00, 0x0, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1273
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 1348 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc0006be0e0}, 0xc0014d9750, 0xc001622f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc0006be0e0}, 0x60?, 0xc0014d9750, 0xc0014d9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc0006be0e0?}, 0xa08ff6?, 0xc0019abb00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014d97d0?, 0x5a1aa4?, 0xc000064b60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1273
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 1349 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1348
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3249 [chan receive, 34 minutes]:
testing.(*T).Run(0xc0015fad00, {0x2925a4b?, 0x55b653?}, 0x3410c78)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0015fad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0015fad00, 0x3410a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4424 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc0006be0e0}, 0xc000506750, 0xc000506798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc0006be0e0}, 0x1c?, 0xc000506750, 0xc000506798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc0006be0e0?}, 0xc00189e4e0?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005067d0?, 0x5a1aa4?, 0xc0014ab900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4439
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 1768 [select, 98 minutes]:
net/http.(*persistConn).writeLoop(0xc001386d80)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1759
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 3556 [chan receive]:
testing.(*T).Run(0xc0006a6820, {0x2925a50?, 0x375e220?}, 0xc001c281e0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006a6820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc0006a6820, 0xc00060c580)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3406
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3422 [chan receive]:
testing.(*T).Run(0xc00066fa00, {0x292ea7a?, 0x375e220?}, 0xc001637740)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00066fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x86b
testing.tRunner(0xc00066fa00, 0xc001742200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3406
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3924 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001978390, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001a38d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019783c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019a68b0, {0x3767e60, 0xc001c0e630}, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019a68b0, 0x3b9aca00, 0x0, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3851
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 4438 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4450
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3926 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3925
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3925 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc0006be0e0}, 0xc000508f50, 0xc0013eef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc0006be0e0}, 0xe0?, 0xc000508f50, 0xc000508f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc0006be0e0?}, 0xc00189f040?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000508fd0?, 0x5a1aa4?, 0xc00097b180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3851
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 7640 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0007ee790, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014d9580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007ee7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001db2ae0, {0x3767e60, 0xc00141a780}, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001db2ae0, 0x3b9aca00, 0x0, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7637
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3851 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019783c0, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3920
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3410 [chan receive, 16 minutes]:
testing.(*T).Run(0xc00139f520, {0x2927010?, 0x0?}, 0xc0014aa200)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00139f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00139f520, 0xc00266ab00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3391
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 7718 [syscall]:
syscall.Syscall6(0xf7, 0x3, 0x18, 0xc0014a1c50, 0x4, 0xc0000695f0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001aee3a8?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00154e600)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc00154e600)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00139e9c0, 0xc00154e600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc00139e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc00139e9c0, 0xc001302db0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3557
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 7810 [syscall]:
syscall.Syscall6(0xf7, 0x3, 0x16, 0xc0013ebc50, 0x4, 0xc0015e4b40, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001ba2e10?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001936900)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001936900)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00189f040, 0xc001936900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc00189f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc00189f040, 0xc001c281e0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3556
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 7823 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d28c40, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7821
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 7719 [IO wait]:
internal/poll.runtime_pollWait(0x7fab3e5615d0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001d83a40?, 0xc00161faf9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d83a40, {0xc00161faf9, 0x507, 0x507})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000ba0a78, {0xc00161faf9?, 0xc0013385a8?, 0x223?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001302e70, {0x37666e0, 0xc0006aad80})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3766860, 0xc001302e70}, {0x37666e0, 0xc0006aad80}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000ba0a78?, {0x3766860, 0xc001302e70})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000ba0a78, {0x3766860, 0xc001302e70})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3766860, 0xc001302e70}, {0x3766760, 0xc000ba0a78}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001302db0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 7718
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 4423 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00266bed0, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000885d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00266bf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00191b5e0, {0x3767e60, 0xc00192f4d0}, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00191b5e0, 0x3b9aca00, 0x0, 0x1, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4439
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3850 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3920
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 7641 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc0006be0e0}, 0xc001dce750, 0xc001dce798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc0006be0e0}, 0x0?, 0xc001dce750, 0xc001dce798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc0006be0e0?}, 0xa08ff6?, 0xc00082fb00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00082fb00?, 0x13?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7637
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 4566 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019787c0, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4564
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 4439 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00266bf00, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4450
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4528 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc0006be0e0}, 0xc00149ef50, 0xc00149ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc0006be0e0}, 0x0?, 0xc00149ef50, 0xc00149ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc0006be0e0?}, 0xa08ff6?, 0xc00082e900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00082e900?, 0x5a1aa4?, 0xc00266abc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4566
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 7637 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007ee7c0, 0xc0006be0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7619
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4529 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4528
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 7844 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7843
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4565 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4564
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 7642 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7641
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb
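The parked goroutines above all belong to client-go's certificate-rotation machinery (dynamicClientCert), which is layered on the polling helpers in k8s.io/apimachinery/pkg/util/wait. For reference, a minimal Go sketch of that worker/poller pattern follows; it is illustrative only, not the client-go source, and the worker body is hypothetical.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})

	// Stand-in for dynamicClientCert's queue-draining worker (hypothetical body).
	processNextItem := func() { fmt.Println("processing queue item") }

	// wait.Until re-runs the worker every period until stopCh closes; this is the
	// loop the runWorker frames above are parked in (via BackoffUntil/JitterUntil).
	go wait.Until(processNextItem, time.Second, stopCh)

	// PollImmediateUntil drives the poller.func1 frames: it evaluates a condition
	// at a fixed interval until the condition reports done or stopCh closes.
	notDoneYet := func() (bool, error) { return false, nil }
	go func() {
		_ = wait.PollImmediateUntil(30*time.Second, notDoneYet, stopCh)
	}()

	time.Sleep(3 * time.Second)
	close(stopCh)
}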

                                                
                                    

Test pass (174/222)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 31.13
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 20.75
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 113.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 141.4
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 10.84
37 TestAddons/parallel/HelmTiller 10.83
39 TestAddons/parallel/CSI 57.29
40 TestAddons/parallel/Headlamp 20.74
41 TestAddons/parallel/CloudSpanner 5.71
42 TestAddons/parallel/LocalPath 57.69
43 TestAddons/parallel/NvidiaDevicePlugin 6.5
44 TestAddons/parallel/Yakd 11.95
45 TestAddons/StoppedEnableDisable 7.55
46 TestCertOptions 52.03
47 TestCertExpiration 264.44
49 TestForceSystemdFlag 51.48
50 TestForceSystemdEnv 97.43
52 TestKVMDriverInstallOrUpdate 5.13
56 TestErrorSpam/setup 42.22
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.7
59 TestErrorSpam/pause 1.54
60 TestErrorSpam/unpause 1.75
61 TestErrorSpam/stop 5.76
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 81.72
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.92
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
73 TestFunctional/serial/CacheCmd/cache/add_local 2.26
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 32.4
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.33
84 TestFunctional/serial/LogsFileCmd 1.37
85 TestFunctional/serial/InvalidService 3.94
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 14.3
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.97
95 TestFunctional/parallel/ServiceCmdConnect 10.48
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 44.9
99 TestFunctional/parallel/SSHCmd 0.46
100 TestFunctional/parallel/CpCmd 1.33
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.82
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
111 TestFunctional/parallel/License 0.65
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
114 TestFunctional/parallel/ProfileCmd/profile_list 0.32
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
116 TestFunctional/parallel/MountCmd/any-port 9.58
117 TestFunctional/parallel/ServiceCmd/List 0.29
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.47
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
126 TestFunctional/parallel/ImageCommands/ImageBuild 6.26
127 TestFunctional/parallel/ImageCommands/Setup 1.99
128 TestFunctional/parallel/ServiceCmd/Format 0.36
129 TestFunctional/parallel/ServiceCmd/URL 0.39
130 TestFunctional/parallel/MountCmd/specific-port 1.96
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.75
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.75
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.87
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 200.46
158 TestMultiControlPlane/serial/DeployApp 8.04
159 TestMultiControlPlane/serial/PingHostFromPods 1.25
160 TestMultiControlPlane/serial/AddWorkerNode 57.79
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.46
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.55
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
172 TestMultiControlPlane/serial/RestartCluster 352.64
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 80.92
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 56.23
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.71
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.61
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.34
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 87.46
211 TestMountStart/serial/StartWithMountFirst 30.82
212 TestMountStart/serial/VerifyMountFirst 0.35
213 TestMountStart/serial/StartWithMountSecond 24.53
214 TestMountStart/serial/VerifyMountSecond 0.35
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.35
217 TestMountStart/serial/Stop 1.26
218 TestMountStart/serial/RestartStopped 21.29
219 TestMountStart/serial/VerifyMountPostStop 0.35
222 TestMultiNode/serial/FreshStart2Nodes 111.19
223 TestMultiNode/serial/DeployApp2Nodes 6.27
224 TestMultiNode/serial/PingHostFrom2Pods 0.78
225 TestMultiNode/serial/AddNode 50.94
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.22
228 TestMultiNode/serial/CopyFile 7.07
229 TestMultiNode/serial/StopNode 2.35
230 TestMultiNode/serial/StartAfterStop 39.46
232 TestMultiNode/serial/DeleteNode 2.01
234 TestMultiNode/serial/RestartMultiNode 180.76
235 TestMultiNode/serial/ValidateNameConflict 47.56
242 TestScheduledStopUnix 110.43
246 TestRunningBinaryUpgrade 155.46
251 TestPause/serial/Start 168.85
252 TestStoppedBinaryUpgrade/Setup 2.28
253 TestStoppedBinaryUpgrade/Upgrade 121.58
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
276 TestNoKubernetes/serial/StartWithK8s 48.76
277 TestNoKubernetes/serial/StartWithStopK8s 17.96
278 TestNoKubernetes/serial/Start 26.88
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
282 TestNoKubernetes/serial/ProfileList 1.08
283 TestNoKubernetes/serial/Stop 1.29
284 TestNoKubernetes/serial/StartNoArgs 43.45
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
TestDownloadOnly/v1.20.0/json-events (31.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-832723 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-832723 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (31.129775119s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (31.13s)
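This subtest drives "minikube start -o=json --download-only", which writes one JSON event per line to stdout. The sketch below shows one way to consume that stream; the invocation is copied loosely from the command above, and since the event schema is not reproduced in this report, each line is decoded generically into a map rather than a concrete struct.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Assumed invocation mirroring the test command above; adjust the binary path as needed.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-832723", "--force",
		"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Each output line is expected to be a self-contained JSON object (event).
	scanner := bufio.NewScanner(stdout)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for scanner.Scan() {
		var event map[string]any
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			fmt.Println("non-JSON line:", scanner.Text())
			continue
		}
		fmt.Printf("event keys: %v\n", keys(event))
	}
	_ = cmd.Wait()
}

func keys(m map[string]any) []string {
	out := make([]string, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	return out
}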

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
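The preload-exists subtest only needs to confirm that the tarball cached by the json-events run is present on disk. A sketch of that check follows, assuming the cache layout shown in the download logs later in this report; the check itself is illustrative, not minikube's preload package.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path as logged by the download step for v1.20.0 on crio.
	path := "/home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

	info, err := os.Stat(path)
	if err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Printf("preload present: %s (%d bytes)\n", path, info.Size())
}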

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-832723
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-832723: exit status 85 (57.537071ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-832723 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |          |
	|         | -p download-only-832723        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:41.123031   13202 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:41.123258   13202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:41.123267   13202 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:41.123271   13202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:41.123451   13202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	W0915 06:29:41.123575   13202 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19644-6166/.minikube/config/config.json: open /home/jenkins/minikube-integration/19644-6166/.minikube/config/config.json: no such file or directory
	I0915 06:29:41.124133   13202 out.go:352] Setting JSON to true
	I0915 06:29:41.125015   13202 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":727,"bootTime":1726381054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:29:41.125109   13202 start.go:139] virtualization: kvm guest
	I0915 06:29:41.127758   13202 out.go:97] [download-only-832723] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0915 06:29:41.127865   13202 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 06:29:41.127909   13202 notify.go:220] Checking for updates...
	I0915 06:29:41.129287   13202 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:29:41.130831   13202 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:41.132216   13202 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:29:41.133473   13202 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:29:41.134879   13202 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0915 06:29:41.137282   13202 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:29:41.137544   13202 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:41.239843   13202 out.go:97] Using the kvm2 driver based on user configuration
	I0915 06:29:41.239878   13202 start.go:297] selected driver: kvm2
	I0915 06:29:41.239889   13202 start.go:901] validating driver "kvm2" against <nil>
	I0915 06:29:41.240341   13202 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:29:41.240511   13202 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 06:29:41.255373   13202 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 06:29:41.255433   13202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:41.255994   13202 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0915 06:29:41.256139   13202 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:29:41.256166   13202 cni.go:84] Creating CNI manager for ""
	I0915 06:29:41.256211   13202 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:29:41.256219   13202 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:29:41.256283   13202 start.go:340] cluster config:
	{Name:download-only-832723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-832723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:41.256486   13202 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:29:41.258119   13202 out.go:97] Downloading VM boot image ...
	I0915 06:29:41.258167   13202 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/iso/amd64/minikube-v1.34.0-1726358414-19644-amd64.iso
	I0915 06:29:55.375697   13202 out.go:97] Starting "download-only-832723" primary control-plane node in "download-only-832723" cluster
	I0915 06:29:55.375730   13202 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 06:29:55.482959   13202 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0915 06:29:55.482990   13202 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:55.483144   13202 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 06:29:55.485207   13202 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 06:29:55.485224   13202 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0915 06:29:55.597840   13202 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0915 06:30:09.642825   13202 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0915 06:30:09.642918   13202 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0915 06:30:10.550598   13202 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0915 06:30:10.551369   13202 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/download-only-832723/config.json ...
	I0915 06:30:10.551402   13202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/download-only-832723/config.json: {Name:mk64f0b43b145c19f380b96bfac3cf6905a4666a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:10.551553   13202 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 06:30:10.551719   13202 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-832723 host does not exist
	  To start a cluster, run: "minikube start -p download-only-832723"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
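The log above fetches the preload tarball against an md5 digest and then saves and verifies the checksum on disk. A minimal sketch of that kind of verification follows; the path and digest are taken from the download log, while the verifyMD5 helper itself is illustrative and not minikube's download package.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through md5 and compares against the expected hex digest.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Path and digest as they appear in the download log above.
	path := "/home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(path, "f93b07cde9c3289306cbaeb7a1803c19"); err != nil {
		fmt.Println("verification failed:", err)
		return
	}
	fmt.Println("preload tarball checksum OK")
}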

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-832723
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (20.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-119130 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-119130 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (20.747101602s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (20.75s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-119130
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-119130: exit status 85 (58.669728ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-832723 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-832723        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-832723        | download-only-832723 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| start   | -o=json --download-only        | download-only-119130 | jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | -p download-only-119130        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:30:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:30:12.565830   13490 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:30:12.565931   13490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:12.565942   13490 out.go:358] Setting ErrFile to fd 2...
	I0915 06:30:12.565949   13490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:12.566143   13490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 06:30:12.566726   13490 out.go:352] Setting JSON to true
	I0915 06:30:12.567559   13490 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":759,"bootTime":1726381054,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:30:12.567650   13490 start.go:139] virtualization: kvm guest
	I0915 06:30:12.569909   13490 out.go:97] [download-only-119130] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:30:12.570050   13490 notify.go:220] Checking for updates...
	I0915 06:30:12.571824   13490 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:30:12.573549   13490 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:30:12.574973   13490 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:30:12.576368   13490 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:30:12.577766   13490 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0915 06:30:12.580571   13490 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:30:12.580770   13490 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:30:12.613306   13490 out.go:97] Using the kvm2 driver based on user configuration
	I0915 06:30:12.613341   13490 start.go:297] selected driver: kvm2
	I0915 06:30:12.613350   13490 start.go:901] validating driver "kvm2" against <nil>
	I0915 06:30:12.613663   13490 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:12.613740   13490 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19644-6166/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0915 06:30:12.628762   13490 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0915 06:30:12.628829   13490 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:30:12.629382   13490 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0915 06:30:12.629518   13490 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:30:12.629545   13490 cni.go:84] Creating CNI manager for ""
	I0915 06:30:12.629590   13490 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0915 06:30:12.629599   13490 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:30:12.629650   13490 start.go:340] cluster config:
	{Name:download-only-119130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-119130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:12.629739   13490 iso.go:125] acquiring lock: {Name:mk4d5594f79a5f26d6f982b61924509003bb3fe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:12.631579   13490 out.go:97] Starting "download-only-119130" primary control-plane node in "download-only-119130" cluster
	I0915 06:30:12.631599   13490 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:13.224131   13490 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:30:13.224159   13490 cache.go:56] Caching tarball of preloaded images
	I0915 06:30:13.224332   13490 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:13.226546   13490 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0915 06:30:13.226570   13490 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0915 06:30:13.337742   13490 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:30:31.596473   13490 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0915 06:30:31.596580   13490 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19644-6166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0915 06:30:32.331298   13490 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:30:32.331667   13490 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/download-only-119130/config.json ...
	I0915 06:30:32.331697   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/download-only-119130/config.json: {Name:mkafe3cd9f81713fe1c21d52fd5cf72dff63620a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:32.331878   13490 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:32.332055   13490 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19644-6166/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-119130 host does not exist
	  To start a cluster, run: "minikube start -p download-only-119130"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-119130
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-702457 --alsologtostderr --binary-mirror http://127.0.0.1:37011 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-702457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-702457
--- PASS: TestBinaryMirror (0.59s)
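TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:37011) in place of the default release download host. Assuming the mirror only needs to serve pre-fetched binaries from a local directory (the directory layout below is an assumption, and this is not the test's actual helper), a minimal sketch looks like this:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Directory layout is an assumption: <root>/v1.31.1/bin/linux/amd64/kubectl etc.,
	// mirroring the release path structure that --binary-mirror substitutes for.
	const mirrorRoot = "/tmp/k8s-binary-mirror"

	mux := http.NewServeMux()
	mux.Handle("/", http.FileServer(http.Dir(mirrorRoot)))

	log.Println("serving binary mirror on http://127.0.0.1:37011")
	log.Fatal(http.ListenAndServe("127.0.0.1:37011", mux))
}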

                                                
                                    
TestOffline (113.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-727172 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-727172 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m52.941480096s)
helpers_test.go:175: Cleaning up "offline-crio-727172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-727172
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-727172: (1.042624217s)
--- PASS: TestOffline (113.98s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-368929
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-368929: exit status 85 (55.225638ms)

                                                
                                                
-- stdout --
	* Profile "addons-368929" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-368929"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-368929
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-368929: exit status 85 (55.997012ms)

                                                
                                                
-- stdout --
	* Profile "addons-368929" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-368929"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (141.4s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-368929 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-368929 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m21.395108827s)
--- PASS: TestAddons/Setup (141.40s)
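The "(dbg) Run" / "(dbg) Done" pairs throughout this report come from helpers that shell out to the minikube binary, time the call, and check the exit status. A minimal sketch of that pattern using only os/exec and a context timeout follows; the binary path, profile name, and trimmed argument list are taken from the command above, while the timeout value is an assumption.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Generous timeout for a full addon-enabled cluster start, as in TestAddons/Setup above.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// A trimmed-down version of the argument list logged above (illustrative, not exhaustive).
	args := []string{"start", "-p", "addons-368929", "--wait=true", "--memory=4000",
		"--addons=registry", "--addons=metrics-server", "--driver=kvm2", "--container-runtime=crio"}

	start := time.Now()
	out, err := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("minikube start -p addons-368929 finished in %s (err=%v)\n", time.Since(start), err)
	fmt.Println(string(out))
}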

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-368929 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-368929 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-c49qm" [39093114-74ee-4ef8-895c-6694ca3debde] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004985465s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-368929
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-368929: (5.833994155s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)
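Several parallel addon tests wait for pods matching a label selector to become healthy, as with k8s-app=gadget in the gadget namespace here. A minimal client-go sketch of that kind of wait follows, using the context-aware relative of the polling helpers visible in the goroutine dump earlier; the kubeconfig path is the one logged in this report, and the 2-second poll interval is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPods polls until at least one pod matching selector is Running.
func waitForRunningPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // retry on transient errors
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19644-6166/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRunningPods(context.Background(), cs, "gadget", "k8s-app=gadget", 8*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("k8s-app=gadget is Running")
}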

                                                
                                    
TestAddons/parallel/HelmTiller (10.83s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.430376ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-cw67q" [6012a392-8d4a-4d69-a877-31fa7f992089] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003952915s
addons_test.go:475: (dbg) Run:  kubectl --context addons-368929 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-368929 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.197590263s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.83s)

                                                
                                    
TestAddons/parallel/CSI (57.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 10.54837ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-368929 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-368929 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d4e678b-df95-4ee6-97bc-b6e67121fe95] Pending
helpers_test.go:344: "task-pv-pod" [3d4e678b-df95-4ee6-97bc-b6e67121fe95] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d4e678b-df95-4ee6-97bc-b6e67121fe95] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.021965346s
addons_test.go:590: (dbg) Run:  kubectl --context addons-368929 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-368929 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-368929 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-368929 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-368929 delete pod task-pv-pod: (1.405514987s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-368929 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-368929 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-368929 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c6e259df-ec34-4960-8424-0fa0453a438b] Pending
helpers_test.go:344: "task-pv-pod-restore" [c6e259df-ec34-4960-8424-0fa0453a438b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c6e259df-ec34-4960-8424-0fa0453a438b] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003645841s
addons_test.go:632: (dbg) Run:  kubectl --context addons-368929 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-368929 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-368929 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.789087122s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 addons disable volumesnapshots --alsologtostderr -v=1: (1.032766569s)
--- PASS: TestAddons/parallel/CSI (57.29s)
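The repeated "get pvc hpvc -o jsonpath={.status.phase}" calls above poll the claim until it reports the Bound phase. The sketch below is the rough client-go equivalent of that loop; the kubeconfig path comes from this report's logs, and the poll interval and use of PollUntilContextTimeout are assumptions rather than the test's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19644-6166/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the PVC phase until it is Bound, the programmatic equivalent of the
	// repeated "kubectl get pvc hpvc -o jsonpath={.status.phase}" calls above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "hpvc", metav1.GetOptions{})
			if err != nil {
				return false, nil // retry; the claim may not exist yet
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pvc hpvc is Bound")
}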

                                                
                                    
TestAddons/parallel/Headlamp (20.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-368929 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-dsgxd" [64ea1db7-92cc-4672-968b-80770eb4553b] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-dsgxd" [64ea1db7-92cc-4672-968b-80770eb4553b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-dsgxd" [64ea1db7-92cc-4672-968b-80770eb4553b] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004377465s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 addons disable headlamp --alsologtostderr -v=1: (5.778606034s)
--- PASS: TestAddons/parallel/Headlamp (20.74s)

TestAddons/parallel/CloudSpanner (5.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-46jgp" [e088606f-156d-4309-abbd-0b5da17b1be4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003585827s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-368929
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

TestAddons/parallel/LocalPath (57.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-368929 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-368929 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dd7c7903-33a9-4734-a29f-ea17e9fdc4e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dd7c7903-33a9-4734-a29f-ea17e9fdc4e3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dd7c7903-33a9-4734-a29f-ea17e9fdc4e3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003595257s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-368929 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 ssh "cat /opt/local-path-provisioner/pvc-37b863f6-d527-401f-89ba-956f4262c0c9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-368929 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-368929 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.794088952s)
--- PASS: TestAddons/parallel/LocalPath (57.69s)

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kl795" [d0981521-b267-4cf9-82e3-73ca27f55631] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003887577s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-368929
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (11.95s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7nhfp" [58fcd9ae-6f81-434d-881c-71c6593e024a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003544224s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-368929 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-368929 addons disable yakd --alsologtostderr -v=1: (5.943033878s)
--- PASS: TestAddons/parallel/Yakd (11.95s)

TestAddons/StoppedEnableDisable (7.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-368929
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-368929: (7.281557615s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-368929
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-368929
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-368929
--- PASS: TestAddons/StoppedEnableDisable (7.55s)

TestCertOptions (52.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-155903 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-155903 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.592425591s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-155903 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-155903 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-155903 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-155903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-155903
--- PASS: TestCertOptions (52.03s)

TestCertExpiration (264.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-773617 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-773617 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (45.574454135s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-773617 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-773617 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.091623734s)
helpers_test.go:175: Cleaning up "cert-expiration-773617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-773617
--- PASS: TestCertExpiration (264.44s)

TestForceSystemdFlag (51.48s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-142456 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-142456 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.282096492s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-142456 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-142456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-142456
--- PASS: TestForceSystemdFlag (51.48s)

TestForceSystemdEnv (97.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-756859 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-756859 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m36.615848948s)
helpers_test.go:175: Cleaning up "force-systemd-env-756859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-756859
--- PASS: TestForceSystemdEnv (97.43s)

TestKVMDriverInstallOrUpdate (5.13s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.13s)

TestErrorSpam/setup (42.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-478829 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-478829 --driver=kvm2  --container-runtime=crio
E0915 06:47:56.198134   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:56.205187   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:56.216568   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:56.238002   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:56.279469   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:56.361009   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:56.522536   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:56.844319   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:57.486613   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:58.768665   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-478829 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-478829 --driver=kvm2  --container-runtime=crio: (42.215859228s)
--- PASS: TestErrorSpam/setup (42.22s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 start --dry-run
E0915 06:48:01.329937   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 pause
--- PASS: TestErrorSpam/pause (1.54s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (5.76s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 stop
E0915 06:48:06.451507   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 stop: (2.29537376s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 stop: (2.003866883s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-478829 --log_dir /tmp/nospam-478829 stop: (1.464711766s)
--- PASS: TestErrorSpam/stop (5.76s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19644-6166/.minikube/files/etc/test/nested/copy/13190/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.72s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884523 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0915 06:48:16.692887   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:48:37.175021   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:18.136984   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-884523 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.72202141s)
--- PASS: TestFunctional/serial/StartWithProxy (81.72s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.92s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884523 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-884523 --alsologtostderr -v=8: (41.920346141s)
functional_test.go:663: soft start took 41.9209647s for "functional-884523" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.92s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-884523 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 cache add registry.k8s.io/pause:3.1: (1.102167187s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 cache add registry.k8s.io/pause:3.3: (1.222361992s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 cache add registry.k8s.io/pause:latest: (1.151146785s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-884523 /tmp/TestFunctionalserialCacheCmdcacheadd_local949689304/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cache add minikube-local-cache-test:functional-884523
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 cache add minikube-local-cache-test:functional-884523: (1.947962294s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cache delete minikube-local-cache-test:functional-884523
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-884523
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.554255ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 cache reload: (1.005981162s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 kubectl -- --context functional-884523 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-884523 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (32.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0915 06:50:40.060927   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-884523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.39717139s)
functional_test.go:761: restart took 32.397264228s for "functional-884523" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.40s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-884523 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 logs: (1.32721693s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 logs --file /tmp/TestFunctionalserialLogsFileCmd2059998694/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 logs --file /tmp/TestFunctionalserialLogsFileCmd2059998694/001/logs.txt: (1.372830314s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (3.94s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-884523 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-884523
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-884523: exit status 115 (272.211762ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.88:32598 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-884523 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 config get cpus: exit status 14 (52.923845ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 config get cpus: exit status 14 (42.089336ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

TestFunctional/parallel/DashboardCmd (14.3s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-884523 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-884523 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23960: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.30s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-884523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.780738ms)

                                                
                                                
-- stdout --
	* [functional-884523] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 06:51:16.235862   23248 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:16.236101   23248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:16.236109   23248 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:16.236113   23248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:16.236284   23248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 06:51:16.236780   23248 out.go:352] Setting JSON to false
	I0915 06:51:16.237985   23248 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2022,"bootTime":1726381054,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:51:16.238077   23248 start.go:139] virtualization: kvm guest
	I0915 06:51:16.240359   23248 out.go:177] * [functional-884523] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:51:16.241700   23248 notify.go:220] Checking for updates...
	I0915 06:51:16.241709   23248 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:51:16.243158   23248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:16.244531   23248 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:51:16.245984   23248 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:51:16.247245   23248 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:16.248590   23248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:16.250527   23248 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:16.251158   23248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:51:16.251217   23248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:51:16.265998   23248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I0915 06:51:16.266470   23248 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:51:16.267041   23248 main.go:141] libmachine: Using API Version  1
	I0915 06:51:16.267093   23248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:51:16.267425   23248 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:51:16.267613   23248 main.go:141] libmachine: (functional-884523) Calling .DriverName
	I0915 06:51:16.267838   23248 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:16.268113   23248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:51:16.268174   23248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:51:16.282674   23248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0915 06:51:16.283103   23248 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:51:16.283703   23248 main.go:141] libmachine: Using API Version  1
	I0915 06:51:16.283733   23248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:51:16.284000   23248 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:51:16.284165   23248 main.go:141] libmachine: (functional-884523) Calling .DriverName
	I0915 06:51:16.330264   23248 out.go:177] * Using the kvm2 driver based on existing profile
	I0915 06:51:16.331690   23248 start.go:297] selected driver: kvm2
	I0915 06:51:16.331736   23248 start.go:901] validating driver "kvm2" against &{Name:functional-884523 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-884523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:16.331889   23248 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:16.334224   23248 out.go:201] 
	W0915 06:51:16.335530   23248 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 06:51:16.336899   23248 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884523 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-884523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-884523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.504369ms)

                                                
                                                
-- stdout --
	* [functional-884523] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 06:51:16.529007   23332 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:16.529102   23332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:16.529112   23332 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:16.529116   23332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:16.529369   23332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 06:51:16.529973   23332 out.go:352] Setting JSON to false
	I0915 06:51:16.531180   23332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2022,"bootTime":1726381054,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:51:16.531295   23332 start.go:139] virtualization: kvm guest
	I0915 06:51:16.533273   23332 out.go:177] * [functional-884523] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0915 06:51:16.534517   23332 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:51:16.534522   23332 notify.go:220] Checking for updates...
	I0915 06:51:16.536923   23332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:16.538209   23332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	I0915 06:51:16.539381   23332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	I0915 06:51:16.540689   23332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:16.541887   23332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:16.543582   23332 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:16.544133   23332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:51:16.544184   23332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:51:16.561259   23332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32993
	I0915 06:51:16.561694   23332 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:51:16.562246   23332 main.go:141] libmachine: Using API Version  1
	I0915 06:51:16.562270   23332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:51:16.562672   23332 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:51:16.562863   23332 main.go:141] libmachine: (functional-884523) Calling .DriverName
	I0915 06:51:16.563119   23332 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:16.563455   23332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 06:51:16.563495   23332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 06:51:16.580483   23332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0915 06:51:16.580941   23332 main.go:141] libmachine: () Calling .GetVersion
	I0915 06:51:16.581486   23332 main.go:141] libmachine: Using API Version  1
	I0915 06:51:16.581513   23332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 06:51:16.581887   23332 main.go:141] libmachine: () Calling .GetMachineName
	I0915 06:51:16.582090   23332 main.go:141] libmachine: (functional-884523) Calling .DriverName
	I0915 06:51:16.616130   23332 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0915 06:51:16.617515   23332 start.go:297] selected driver: kvm2
	I0915 06:51:16.617529   23332 start.go:901] validating driver "kvm2" against &{Name:functional-884523 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-884523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:16.617637   23332 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:16.620062   23332 out.go:201] 
	W0915 06:51:16.621525   23332 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 06:51:16.622918   23332 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)
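The status checks above can be reproduced outside the test harness. A minimal sketch, assuming the functional-884523 profile from this run; the label text in the template is arbitrary, while the {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} fields are the ones exercised by the logged commands:

    # Plain summary of the profile's components
    out/minikube-linux-amd64 -p functional-884523 status

    # Custom Go-template rendering of the same fields
    out/minikube-linux-amd64 -p functional-884523 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'

    # Machine-readable form
    out/minikube-linux-amd64 -p functional-884523 status -o json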

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-884523 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-884523 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k9nzw" [de601a7a-5f20-44f0-a7ec-96830b9b63eb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k9nzw" [de601a7a-5f20-44f0-a7ec-96830b9b63eb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004952627s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.88:30756
functional_test.go:1675: http://192.168.39.88:30756: success! body:

Hostname: hello-node-connect-67bdd5bbb4-k9nzw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.88:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.88:30756
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.48s)
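The same flow can be replayed by hand. A minimal sketch, assuming the functional-884523 context and profile from this run; the node IP and NodePort will differ between runs, so the URL is queried rather than hard-coded:

    # Deploy the echoserver and expose it on a NodePort
    kubectl --context functional-884523 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-884523 expose deployment hello-node-connect --type=NodePort --port=8080

    # Ask minikube for the reachable URL, then hit it
    URL=$(out/minikube-linux-amd64 -p functional-884523 service hello-node-connect --url)
    curl -s "$URL"   # should return the Hostname / Request dump shown above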

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7a108e05-ec86-456a-82cc-97a79c63fa54] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003557653s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-884523 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-884523 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-884523 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-884523 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c422b645-9d72-4521-9ad2-4da108949340] Pending
helpers_test.go:344: "sp-pod" [c422b645-9d72-4521-9ad2-4da108949340] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c422b645-9d72-4521-9ad2-4da108949340] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.004311245s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-884523 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-884523 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-884523 delete -f testdata/storage-provisioner/pod.yaml: (1.190356722s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-884523 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [94e9315c-2067-45cd-93db-729812f6b525] Pending
helpers_test.go:344: "sp-pod" [94e9315c-2067-45cd-93db-729812f6b525] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [94e9315c-2067-45cd-93db-729812f6b525] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003391938s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-884523 exec sp-pod -- ls /tmp/mount
E0915 06:52:56.197039   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:23.902783   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:57:56.196744   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.90s)
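The persistence check above boils down to writing through a PVC-backed mount, deleting the pod, and reading the file back from a fresh pod. A minimal sketch using the same testdata manifests named in the log (their contents are part of the minikube test tree and are not shown here):

    # Create the claim and a pod that mounts it
    kubectl --context functional-884523 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-884523 get pvc myclaim -o=json
    kubectl --context functional-884523 apply -f testdata/storage-provisioner/pod.yaml

    # Write through the mount, recreate the pod, and confirm the file survived
    kubectl --context functional-884523 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-884523 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-884523 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-884523 exec sp-pod -- ls /tmp/mount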

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh -n functional-884523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cp functional-884523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd739382266/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh -n functional-884523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh -n functional-884523 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)
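For reference, the copy-and-verify round trip above looks like this when run by hand. A minimal sketch with paths taken from the log; the host-side destination under /tmp is a per-run temp directory, so any writable host path works:

    # Host -> guest, then read it back over SSH
    out/minikube-linux-amd64 -p functional-884523 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-884523 ssh -n functional-884523 "sudo cat /home/docker/cp-test.txt"

    # Guest -> host
    out/minikube-linux-amd64 -p functional-884523 cp functional-884523:/home/docker/cp-test.txt /tmp/cp-test.txt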

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13190/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /etc/test/nested/copy/13190/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13190.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /etc/ssl/certs/13190.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13190.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /usr/share/ca-certificates/13190.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/131902.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /etc/ssl/certs/131902.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/131902.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /usr/share/ca-certificates/131902.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
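The paired filenames above follow the usual trust-store layout: the numeric .pem files carry the per-run number also visible in the log lines, and the .0 files are OpenSSL subject-hash names. A minimal sketch of how to inspect both forms by hand; /path/to/cert.pem is a placeholder for any certificate you want to hash:

    # Print the subject hash OpenSSL uses for /etc/ssl/certs/<hash>.0 entries
    openssl x509 -noout -hash -in /path/to/cert.pem

    # Read the synced cert and its hash-named counterpart inside the VM
    out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /etc/ssl/certs/13190.pem"
    out/minikube-linux-amd64 -p functional-884523 ssh "sudo cat /etc/ssl/certs/51391683.0"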

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-884523 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh "sudo systemctl is-active docker": exit status 1 (259.218565ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh "sudo systemctl is-active containerd": exit status 1 (272.815678ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
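This check simply asserts that the runtimes not selected for the profile are inactive inside the VM; the cluster in this run uses crio. A minimal sketch of the same probe, assuming the crio systemd unit name used by the minikube ISO:

    # Each command exits non-zero and prints "inactive" when the unit is not running
    out/minikube-linux-amd64 -p functional-884523 ssh "sudo systemctl is-active docker"
    out/minikube-linux-amd64 -p functional-884523 ssh "sudo systemctl is-active containerd"

    # The selected runtime for this profile should report "active"
    out/minikube-linux-amd64 -p functional-884523 ssh "sudo systemctl is-active crio"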

                                                
                                    
TestFunctional/parallel/License (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-884523 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-884523 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9nmz9" [7c4089cb-f6a9-46b8-b03a-bc1055397383] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9nmz9" [7c4089cb-f6a9-46b8-b03a-bc1055397383] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003958327s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "278.378409ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.209606ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "264.059725ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.06452ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdany-port3005022667/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726383065932779370" to /tmp/TestFunctionalparallelMountCmdany-port3005022667/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726383065932779370" to /tmp/TestFunctionalparallelMountCmdany-port3005022667/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726383065932779370" to /tmp/TestFunctionalparallelMountCmdany-port3005022667/001/test-1726383065932779370
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.946031ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 06:51 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 06:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 06:51 test-1726383065932779370
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh cat /mount-9p/test-1726383065932779370
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-884523 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [178fb322-272c-438b-9d7c-b77cdbc47499] Pending
helpers_test.go:344: "busybox-mount" [178fb322-272c-438b-9d7c-b77cdbc47499] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [178fb322-272c-438b-9d7c-b77cdbc47499] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [178fb322-272c-438b-9d7c-b77cdbc47499] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004426296s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-884523 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdany-port3005022667/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.58s)
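A minimal by-hand version of the 9p mount exercise above; /tmp/some-host-dir is a placeholder for any host directory (the test uses a per-run temp dir), and the mount runs in the background until unmounted:

    # Start the mount: host dir -> /mount-9p inside the VM
    out/minikube-linux-amd64 mount -p functional-884523 /tmp/some-host-dir:/mount-9p &

    # Confirm the 9p mount is visible from the guest and list its contents
    out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-884523 ssh -- ls -la /mount-9p

    # Tear down when done
    out/minikube-linux-amd64 -p functional-884523 ssh "sudo umount -f /mount-9p"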

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 service list -o json
functional_test.go:1494: Took "369.448322ms" to run "out/minikube-linux-amd64 -p functional-884523 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.88:31683
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884523 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-884523
localhost/kicbase/echo-server:functional-884523
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884523 image ls --format short --alsologtostderr:
I0915 06:51:26.748739   24369 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:26.749170   24369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:26.749225   24369 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:26.749244   24369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:26.749742   24369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
I0915 06:51:26.750636   24369 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:26.750749   24369 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:26.751119   24369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:26.751163   24369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:26.765838   24369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
I0915 06:51:26.766317   24369 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:26.766817   24369 main.go:141] libmachine: Using API Version  1
I0915 06:51:26.766838   24369 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:26.767183   24369 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:26.767342   24369 main.go:141] libmachine: (functional-884523) Calling .GetState
I0915 06:51:26.769000   24369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:26.769045   24369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:26.783962   24369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
I0915 06:51:26.784398   24369 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:26.784812   24369 main.go:141] libmachine: Using API Version  1
I0915 06:51:26.784826   24369 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:26.785181   24369 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:26.785363   24369 main.go:141] libmachine: (functional-884523) Calling .DriverName
I0915 06:51:26.785531   24369 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:26.785559   24369 main.go:141] libmachine: (functional-884523) Calling .GetSSHHostname
I0915 06:51:26.787928   24369 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:26.788380   24369 main.go:141] libmachine: (functional-884523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:80:2e", ip: ""} in network mk-functional-884523: {Iface:virbr1 ExpiryTime:2024-09-15 07:48:26 +0000 UTC Type:0 Mac:52:54:00:46:80:2e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-884523 Clientid:01:52:54:00:46:80:2e}
I0915 06:51:26.788408   24369 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined IP address 192.168.39.88 and MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:26.788562   24369 main.go:141] libmachine: (functional-884523) Calling .GetSSHPort
I0915 06:51:26.788708   24369 main.go:141] libmachine: (functional-884523) Calling .GetSSHKeyPath
I0915 06:51:26.788836   24369 main.go:141] libmachine: (functional-884523) Calling .GetSSHUsername
I0915 06:51:26.788968   24369 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/functional-884523/id_rsa Username:docker}
I0915 06:51:26.873131   24369 ssh_runner.go:195] Run: sudo crictl images --output json
I0915 06:51:26.908361   24369 main.go:141] libmachine: Making call to close driver server
I0915 06:51:26.908379   24369 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:26.908637   24369 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:26.908660   24369 main.go:141] libmachine: Making call to close connection to plugin binary
I0915 06:51:26.908683   24369 main.go:141] libmachine: Making call to close driver server
I0915 06:51:26.908639   24369 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
I0915 06:51:26.908691   24369 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:26.908899   24369 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:26.908914   24369 main.go:141] libmachine: Making call to close connection to plugin binary
I0915 06:51:26.908987   24369 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
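The image listings in this block all come from the same source: the stderr above shows the CLI opening an SSH session to the VM and running sudo crictl images --output json, then rendering that inventory in the requested format. A minimal sketch of the equivalent commands:

    # Different renderings of the same image inventory
    out/minikube-linux-amd64 -p functional-884523 image ls --format short
    out/minikube-linux-amd64 -p functional-884523 image ls --format table

    # The raw data the CLI formats, straight from the container runtime
    out/minikube-linux-amd64 -p functional-884523 ssh "sudo crictl images --output json"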

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884523 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| localhost/kicbase/echo-server           | functional-884523  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-884523  | 23554aa6f23a5 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884523 image ls --format table --alsologtostderr:
I0915 06:51:27.202606   24417 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:27.202733   24417 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:27.202743   24417 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:27.202750   24417 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:27.203056   24417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
I0915 06:51:27.203885   24417 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:27.204045   24417 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:27.204670   24417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:27.204715   24417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:27.219819   24417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43553
I0915 06:51:27.220341   24417 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:27.220898   24417 main.go:141] libmachine: Using API Version  1
I0915 06:51:27.220922   24417 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:27.221312   24417 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:27.221505   24417 main.go:141] libmachine: (functional-884523) Calling .GetState
I0915 06:51:27.223762   24417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:27.223805   24417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:27.239234   24417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
I0915 06:51:27.239762   24417 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:27.240321   24417 main.go:141] libmachine: Using API Version  1
I0915 06:51:27.240342   24417 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:27.240729   24417 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:27.240897   24417 main.go:141] libmachine: (functional-884523) Calling .DriverName
I0915 06:51:27.241117   24417 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:27.241157   24417 main.go:141] libmachine: (functional-884523) Calling .GetSSHHostname
I0915 06:51:27.244069   24417 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:27.244484   24417 main.go:141] libmachine: (functional-884523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:80:2e", ip: ""} in network mk-functional-884523: {Iface:virbr1 ExpiryTime:2024-09-15 07:48:26 +0000 UTC Type:0 Mac:52:54:00:46:80:2e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-884523 Clientid:01:52:54:00:46:80:2e}
I0915 06:51:27.244519   24417 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined IP address 192.168.39.88 and MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:27.244628   24417 main.go:141] libmachine: (functional-884523) Calling .GetSSHPort
I0915 06:51:27.244797   24417 main.go:141] libmachine: (functional-884523) Calling .GetSSHKeyPath
I0915 06:51:27.244954   24417 main.go:141] libmachine: (functional-884523) Calling .GetSSHUsername
I0915 06:51:27.245094   24417 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/functional-884523/id_rsa Username:docker}
I0915 06:51:27.329898   24417 ssh_runner.go:195] Run: sudo crictl images --output json
I0915 06:51:27.393279   24417 main.go:141] libmachine: Making call to close driver server
I0915 06:51:27.393299   24417 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:27.393566   24417 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:27.393566   24417 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
I0915 06:51:27.393593   24417 main.go:141] libmachine: Making call to close connection to plugin binary
I0915 06:51:27.393603   24417 main.go:141] libmachine: Making call to close driver server
I0915 06:51:27.393610   24417 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:27.393832   24417 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
I0915 06:51:27.393880   24417 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:27.393897   24417 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884523 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"23554aa6f23a587b5e2d88c0a950093c90c5b3ff945dd19a7ec34f006dcb9af0","repoDigests":["localhost/minikube-local-cache-test@sha256:64cf6116d095f9faee5129fea6ec361ea1f214adcb593342909ddb03552c498f"],"repoTags":["localhost/minikube-local-cache-test:functional-884523"],"size":"3330"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"17
5ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295
f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fb
bb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb2
6bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"9056ab77afb8e18e04303f11
000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-884523"],"size":"4943877"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884523 image ls --format json --alsologtostderr:
I0915 06:51:26.953277   24393 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:26.953393   24393 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:26.953401   24393 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:26.953406   24393 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:26.953565   24393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
I0915 06:51:26.954150   24393 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:26.954242   24393 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:26.954617   24393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:26.954653   24393 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:26.969709   24393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
I0915 06:51:26.970254   24393 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:26.970918   24393 main.go:141] libmachine: Using API Version  1
I0915 06:51:26.970950   24393 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:26.971295   24393 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:26.971530   24393 main.go:141] libmachine: (functional-884523) Calling .GetState
I0915 06:51:26.973360   24393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:26.973402   24393 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:26.988094   24393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
I0915 06:51:26.988568   24393 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:26.989101   24393 main.go:141] libmachine: Using API Version  1
I0915 06:51:26.989132   24393 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:26.989454   24393 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:26.989623   24393 main.go:141] libmachine: (functional-884523) Calling .DriverName
I0915 06:51:26.989848   24393 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:26.989880   24393 main.go:141] libmachine: (functional-884523) Calling .GetSSHHostname
I0915 06:51:26.992710   24393 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:26.993101   24393 main.go:141] libmachine: (functional-884523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:80:2e", ip: ""} in network mk-functional-884523: {Iface:virbr1 ExpiryTime:2024-09-15 07:48:26 +0000 UTC Type:0 Mac:52:54:00:46:80:2e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-884523 Clientid:01:52:54:00:46:80:2e}
I0915 06:51:26.993129   24393 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined IP address 192.168.39.88 and MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:26.993257   24393 main.go:141] libmachine: (functional-884523) Calling .GetSSHPort
I0915 06:51:26.993411   24393 main.go:141] libmachine: (functional-884523) Calling .GetSSHKeyPath
I0915 06:51:26.993543   24393 main.go:141] libmachine: (functional-884523) Calling .GetSSHUsername
I0915 06:51:26.993633   24393 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/functional-884523/id_rsa Username:docker}
I0915 06:51:27.095285   24393 ssh_runner.go:195] Run: sudo crictl images --output json
I0915 06:51:27.144514   24393 main.go:141] libmachine: Making call to close driver server
I0915 06:51:27.144529   24393 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:27.144805   24393 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:27.144825   24393 main.go:141] libmachine: Making call to close connection to plugin binary
I0915 06:51:27.144838   24393 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
I0915 06:51:27.144849   24393 main.go:141] libmachine: Making call to close driver server
I0915 06:51:27.144858   24393 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:27.145083   24393 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:27.145109   24393 main.go:141] libmachine: Making call to close connection to plugin binary
I0915 06:51:27.145132   24393 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884523 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 23554aa6f23a587b5e2d88c0a950093c90c5b3ff945dd19a7ec34f006dcb9af0
repoDigests:
- localhost/minikube-local-cache-test@sha256:64cf6116d095f9faee5129fea6ec361ea1f214adcb593342909ddb03552c498f
repoTags:
- localhost/minikube-local-cache-test:functional-884523
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-884523
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884523 image ls --format yaml --alsologtostderr:
I0915 06:51:27.449956   24441 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:27.450063   24441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:27.450073   24441 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:27.450079   24441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:27.450377   24441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
I0915 06:51:27.451111   24441 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:27.451257   24441 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:27.451779   24441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:27.451830   24441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:27.467078   24441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33335
I0915 06:51:27.467528   24441 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:27.468126   24441 main.go:141] libmachine: Using API Version  1
I0915 06:51:27.468149   24441 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:27.468578   24441 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:27.468762   24441 main.go:141] libmachine: (functional-884523) Calling .GetState
I0915 06:51:27.470719   24441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:27.470759   24441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:27.486030   24441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
I0915 06:51:27.486490   24441 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:27.486947   24441 main.go:141] libmachine: Using API Version  1
I0915 06:51:27.486971   24441 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:27.487406   24441 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:27.487617   24441 main.go:141] libmachine: (functional-884523) Calling .DriverName
I0915 06:51:27.487876   24441 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:27.487912   24441 main.go:141] libmachine: (functional-884523) Calling .GetSSHHostname
I0915 06:51:27.490776   24441 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:27.491206   24441 main.go:141] libmachine: (functional-884523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:80:2e", ip: ""} in network mk-functional-884523: {Iface:virbr1 ExpiryTime:2024-09-15 07:48:26 +0000 UTC Type:0 Mac:52:54:00:46:80:2e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-884523 Clientid:01:52:54:00:46:80:2e}
I0915 06:51:27.491238   24441 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined IP address 192.168.39.88 and MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:27.491339   24441 main.go:141] libmachine: (functional-884523) Calling .GetSSHPort
I0915 06:51:27.491482   24441 main.go:141] libmachine: (functional-884523) Calling .GetSSHKeyPath
I0915 06:51:27.491616   24441 main.go:141] libmachine: (functional-884523) Calling .GetSSHUsername
I0915 06:51:27.491776   24441 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/functional-884523/id_rsa Username:docker}
I0915 06:51:27.610210   24441 ssh_runner.go:195] Run: sudo crictl images --output json
I0915 06:51:27.664924   24441 main.go:141] libmachine: Making call to close driver server
I0915 06:51:27.664934   24441 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:27.665224   24441 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:27.665241   24441 main.go:141] libmachine: Making call to close connection to plugin binary
I0915 06:51:27.665255   24441 main.go:141] libmachine: Making call to close driver server
I0915 06:51:27.665262   24441 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:27.665272   24441 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
I0915 06:51:27.665500   24441 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:27.665517   24441 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
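
The YAML image list above is assembled by SSHing into the node and querying CRI-O's image store. A minimal manual equivalent, assuming the functional-884523 profile from this run is still up:

# List images through minikube's formatter (same command the test runs)
out/minikube-linux-amd64 -p functional-884523 image ls --format yaml --alsologtostderr

# Or query CRI-O directly, which is what the command does over SSH under the hood
out/minikube-linux-amd64 -p functional-884523 ssh -- sudo crictl images --output json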

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh pgrep buildkitd: exit status 1 (226.474379ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image build -t localhost/my-image:functional-884523 testdata/build --alsologtostderr
2024/09/15 06:51:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 image build -t localhost/my-image:functional-884523 testdata/build --alsologtostderr: (5.805107764s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-884523 image build -t localhost/my-image:functional-884523 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c90356d9ec2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-884523
--> f5c4488bba4
Successfully tagged localhost/my-image:functional-884523
f5c4488bba4847b0ef2fed074ac6df900ebf60860e395165b7df60738ff68527
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-884523 image build -t localhost/my-image:functional-884523 testdata/build --alsologtostderr:
I0915 06:51:27.941359   24494 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:27.941524   24494 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:27.941536   24494 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:27.941544   24494 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:27.941834   24494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
I0915 06:51:27.942695   24494 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:27.943346   24494 config.go:182] Loaded profile config "functional-884523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:27.943852   24494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:27.943895   24494 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:27.959627   24494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
I0915 06:51:27.960197   24494 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:27.960823   24494 main.go:141] libmachine: Using API Version  1
I0915 06:51:27.960857   24494 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:27.961280   24494 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:27.961487   24494 main.go:141] libmachine: (functional-884523) Calling .GetState
I0915 06:51:27.963350   24494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0915 06:51:27.963401   24494 main.go:141] libmachine: Launching plugin server for driver kvm2
I0915 06:51:27.978493   24494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
I0915 06:51:27.979694   24494 main.go:141] libmachine: () Calling .GetVersion
I0915 06:51:27.980220   24494 main.go:141] libmachine: Using API Version  1
I0915 06:51:27.980239   24494 main.go:141] libmachine: () Calling .SetConfigRaw
I0915 06:51:27.980615   24494 main.go:141] libmachine: () Calling .GetMachineName
I0915 06:51:27.980845   24494 main.go:141] libmachine: (functional-884523) Calling .DriverName
I0915 06:51:27.981016   24494 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:27.981050   24494 main.go:141] libmachine: (functional-884523) Calling .GetSSHHostname
I0915 06:51:27.984579   24494 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:27.985030   24494 main.go:141] libmachine: (functional-884523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:80:2e", ip: ""} in network mk-functional-884523: {Iface:virbr1 ExpiryTime:2024-09-15 07:48:26 +0000 UTC Type:0 Mac:52:54:00:46:80:2e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-884523 Clientid:01:52:54:00:46:80:2e}
I0915 06:51:27.985061   24494 main.go:141] libmachine: (functional-884523) DBG | domain functional-884523 has defined IP address 192.168.39.88 and MAC address 52:54:00:46:80:2e in network mk-functional-884523
I0915 06:51:27.985419   24494 main.go:141] libmachine: (functional-884523) Calling .GetSSHPort
I0915 06:51:27.985698   24494 main.go:141] libmachine: (functional-884523) Calling .GetSSHKeyPath
I0915 06:51:27.985881   24494 main.go:141] libmachine: (functional-884523) Calling .GetSSHUsername
I0915 06:51:27.986054   24494 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/functional-884523/id_rsa Username:docker}
I0915 06:51:28.109392   24494 build_images.go:161] Building image from path: /tmp/build.3231966106.tar
I0915 06:51:28.109451   24494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 06:51:28.121410   24494 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3231966106.tar
I0915 06:51:28.130067   24494 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3231966106.tar: stat -c "%s %y" /var/lib/minikube/build/build.3231966106.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3231966106.tar': No such file or directory
I0915 06:51:28.130148   24494 ssh_runner.go:362] scp /tmp/build.3231966106.tar --> /var/lib/minikube/build/build.3231966106.tar (3072 bytes)
I0915 06:51:28.177975   24494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3231966106
I0915 06:51:28.210975   24494 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3231966106 -xf /var/lib/minikube/build/build.3231966106.tar
I0915 06:51:28.243863   24494 crio.go:315] Building image: /var/lib/minikube/build/build.3231966106
I0915 06:51:28.243941   24494 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-884523 /var/lib/minikube/build/build.3231966106 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0915 06:51:33.675571   24494 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-884523 /var/lib/minikube/build/build.3231966106 --cgroup-manager=cgroupfs: (5.431606534s)
I0915 06:51:33.675640   24494 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3231966106
I0915 06:51:33.686708   24494 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3231966106.tar
I0915 06:51:33.696857   24494 build_images.go:217] Built localhost/my-image:functional-884523 from /tmp/build.3231966106.tar
I0915 06:51:33.696888   24494 build_images.go:133] succeeded building to: functional-884523
I0915 06:51:33.696892   24494 build_images.go:134] failed building to: 
I0915 06:51:33.696914   24494 main.go:141] libmachine: Making call to close driver server
I0915 06:51:33.696925   24494 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:33.697286   24494 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:33.697313   24494 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
I0915 06:51:33.697335   24494 main.go:141] libmachine: Making call to close connection to plugin binary
I0915 06:51:33.697351   24494 main.go:141] libmachine: Making call to close driver server
I0915 06:51:33.697357   24494 main.go:141] libmachine: (functional-884523) Calling .Close
I0915 06:51:33.697675   24494 main.go:141] libmachine: (functional-884523) DBG | Closing plugin on server side
I0915 06:51:33.697717   24494 main.go:141] libmachine: Successfully made call to close driver server
I0915 06:51:33.697738   24494 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.26s)
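
The STEP lines in the build output imply a three-line build context. A rough reconstruction for trying the same flow by hand is below; the Dockerfile contents and the /tmp paths are inferred placeholders, not the actual contents of testdata/build:

# Hypothetical build context mirroring STEP 1/3 through 3/3 above
mkdir -p /tmp/my-build
echo hello > /tmp/my-build/content.txt        # any file content will do
cat > /tmp/my-build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# minikube tars the context, copies it into the VM, and builds it there with podman
out/minikube-linux-amd64 -p functional-884523 image build -t localhost/my-image:functional-884523 /tmp/my-build --alsologtostderr
out/minikube-linux-amd64 -p functional-884523 image ls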

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.955008437s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-884523
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.88:31683
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
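
The two ServiceCmd checks above exercise the same lookup with different output: --url prints the full endpoint, while --format applies a Go template to its fields. A manual sketch, assuming the hello-node service created earlier in the run is still exposed:

# Full endpoint, e.g. http://192.168.39.88:31683 in this run
out/minikube-linux-amd64 -p functional-884523 service hello-node --url

# Just the node IP, via a Go template over the same data
out/minikube-linux-amd64 -p functional-884523 service hello-node --url --format='{{.IP}}'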

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdspecific-port383094646/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.036179ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdspecific-port383094646/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh "sudo umount -f /mount-9p": exit status 1 (243.527162ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-884523 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdspecific-port383094646/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)
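
The specific-port case runs the 9p mount on a fixed port and then checks it from inside the guest. A hand-run sketch of the same sequence; the host directory is a placeholder:

# Start the 9p mount on a fixed port in the background
out/minikube-linux-amd64 mount -p functional-884523 /tmp/host-dir:/mount-9p --port 46464 --alsologtostderr -v=1 &
MOUNT_PID=$!

# Verify and inspect the mount from the guest, then stop the mount process
out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-884523 ssh -- ls -la /mount-9p
kill $MOUNT_PID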

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image load --daemon kicbase/echo-server:functional-884523 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-884523 image load --daemon kicbase/echo-server:functional-884523 --alsologtostderr: (3.50574722s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1479733784/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1479733784/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1479733784/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T" /mount1: exit status 1 (207.423532ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-884523 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1479733784/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1479733784/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-884523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1479733784/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)
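
VerifyCleanup mounts the same host directory at three targets and then relies on a single --kill=true call to tear all of them down. Roughly, by hand (host directory again a placeholder):

# Start several mounts of one host directory in the background
for m in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 mount -p functional-884523 /tmp/host-dir:$m --alsologtostderr -v=1 &
done

# Confirm each target is mounted in the guest
for m in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 -p functional-884523 ssh "findmnt -T $m"
done

# One --kill=true invocation stops every mount process for the profile
out/minikube-linux-amd64 mount -p functional-884523 --kill=true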

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image load --daemon kicbase/echo-server:functional-884523 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-884523
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image load --daemon kicbase/echo-server:functional-884523 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)
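
The load-daemon variants all push an image from the local Docker daemon into the cluster's CRI-O store. The tag-and-load flow, condensed from the commands above:

# Tag a locally pulled image with the profile-specific name, then load it into the node
docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-884523
out/minikube-linux-amd64 -p functional-884523 image load --daemon kicbase/echo-server:functional-884523 --alsologtostderr
out/minikube-linux-amd64 -p functional-884523 image ls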

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image save kicbase/echo-server:functional-884523 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image rm kicbase/echo-server:functional-884523 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-884523
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-884523 image save --daemon kicbase/echo-server:functional-884523 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-884523
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
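
Taken together, the last four image subtests form a save/remove/load round trip. A condensed manual version; the tarball path is ours to choose:

IMG=kicbase/echo-server:functional-884523
TAR=/tmp/echo-server-save.tar

out/minikube-linux-amd64 -p functional-884523 image save "$IMG" "$TAR" --alsologtostderr      # node image -> tarball
out/minikube-linux-amd64 -p functional-884523 image rm "$IMG" --alsologtostderr               # drop it from the node
out/minikube-linux-amd64 -p functional-884523 image load "$TAR" --alsologtostderr             # restore it from the tarball
out/minikube-linux-amd64 -p functional-884523 image save --daemon "$IMG" --alsologtostderr    # push it into the local Docker daemon
docker image inspect localhost/kicbase/echo-server:functional-884523                          # the test then checks it under the localhost/ prefix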

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-884523
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-884523
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-884523
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (200.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-670527 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0915 07:02:56.198017   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:04:19.264592   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-670527 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.827346962s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.46s)
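
StartCluster brings up a multi-control-plane (HA) cluster with the flags visible above. The equivalent manual invocation, reusing this run's profile name (any free name works):

out/minikube-linux-amd64 start -p ha-670527 --ha --wait=true --memory=2200 \
  --driver=kvm2 --container-runtime=crio -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr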

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-670527 -- rollout status deployment/busybox: (5.936424103s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-4cgxn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-gxwp9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-rvbkj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-4cgxn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-gxwp9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-rvbkj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-4cgxn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-gxwp9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-rvbkj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.04s)
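
DeployApp applies the busybox DNS-test deployment and resolves kubernetes names from each replica. A trimmed manual version; the jsonpath index just grabs one pod, so adjust it for your own cluster:

out/minikube-linux-amd64 kubectl -p ha-670527 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-670527 -- rollout status deployment/busybox
POD=$(out/minikube-linux-amd64 kubectl -p ha-670527 -- get pods -o jsonpath='{.items[0].metadata.name}')
out/minikube-linux-amd64 kubectl -p ha-670527 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local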

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-4cgxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-4cgxn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-gxwp9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-gxwp9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-rvbkj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670527 -- exec busybox-7dff88458-rvbkj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
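
The ping check extracts the address the guest DNS returns for host.minikube.internal and pings it once from a pod. The awk/cut pipeline is copied from the invocation above; the pod name is a placeholder:

POD=busybox-7dff88458-4cgxn   # substitute a pod from your own busybox deployment
HOST_IP=$(out/minikube-linux-amd64 kubectl -p ha-670527 -- exec "$POD" -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
out/minikube-linux-amd64 kubectl -p ha-670527 -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"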

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-670527 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-670527 -v=7 --alsologtostderr: (56.974119272s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-670527 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp testdata/cp-test.txt ha-670527:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527:/home/docker/cp-test.txt ha-670527-m02:/home/docker/cp-test_ha-670527_ha-670527-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test_ha-670527_ha-670527-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527:/home/docker/cp-test.txt ha-670527-m03:/home/docker/cp-test_ha-670527_ha-670527-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test_ha-670527_ha-670527-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527:/home/docker/cp-test.txt ha-670527-m04:/home/docker/cp-test_ha-670527_ha-670527-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test_ha-670527_ha-670527-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp testdata/cp-test.txt ha-670527-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m02:/home/docker/cp-test.txt ha-670527:/home/docker/cp-test_ha-670527-m02_ha-670527.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test_ha-670527-m02_ha-670527.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m02:/home/docker/cp-test.txt ha-670527-m03:/home/docker/cp-test_ha-670527-m02_ha-670527-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test_ha-670527-m02_ha-670527-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m02:/home/docker/cp-test.txt ha-670527-m04:/home/docker/cp-test_ha-670527-m02_ha-670527-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test_ha-670527-m02_ha-670527-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp testdata/cp-test.txt ha-670527-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt ha-670527:/home/docker/cp-test_ha-670527-m03_ha-670527.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test_ha-670527-m03_ha-670527.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt ha-670527-m02:/home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test_ha-670527-m03_ha-670527-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m03:/home/docker/cp-test.txt ha-670527-m04:/home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test_ha-670527-m03_ha-670527-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp testdata/cp-test.txt ha-670527-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302607583/001/cp-test_ha-670527-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt ha-670527:/home/docker/cp-test_ha-670527-m04_ha-670527.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527 "sudo cat /home/docker/cp-test_ha-670527-m04_ha-670527.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt ha-670527-m02:/home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test_ha-670527-m04_ha-670527-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m04:/home/docker/cp-test.txt ha-670527-m03:/home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m04 "sudo cat /home/docker/cp-test.txt"
E0915 07:06:02.684197   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:02.690683   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:02.702475   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m03 "sudo cat /home/docker/cp-test_ha-670527-m04_ha-670527-m03.txt"
E0915 07:06:02.724114   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:02.765597   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:02.847033   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.46s)
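
CopyFile exercises minikube cp in every direction using the node-targeted path syntax (node:path) and reads each copy back over SSH. One leg of that matrix, by hand; the local destination path is a placeholder:

out/minikube-linux-amd64 -p ha-670527 cp testdata/cp-test.txt ha-670527-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-670527 ssh -n ha-670527-m02 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p ha-670527 cp ha-670527-m02:/home/docker/cp-test.txt /tmp/cp-test-m02.txt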

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.465407469s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-670527 node delete m03 -v=7 --alsologtostderr: (15.80893196s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.55s)
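
DeleteSecondaryNode removes one control-plane member and then verifies the remaining nodes still report Ready. Manually:

out/minikube-linux-amd64 -p ha-670527 node delete m03 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
kubectl get nodes   # the deleted node should no longer be listed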

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (352.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-670527 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0915 07:20:59.267563   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:21:02.684682   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:22:25.747562   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:22:56.196953   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-670527 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m51.820430243s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (352.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-670527 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-670527 --control-plane -v=7 --alsologtostderr: (1m20.0684659s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-670527 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (56.23s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-466615 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0915 07:26:02.684078   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-466615 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.233744908s)
--- PASS: TestJSONOutput/start/Command (56.23s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-466615 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-466615 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-466615 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-466615 --output=json --user=testUser: (7.344760771s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-316930 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-316930 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.772833ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a23ec36e-2c68-4f20-893b-b91e22f6bd10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-316930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"45411df5-bc7d-41d9-ba77-7a3e09fc792d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"c4b428a0-1934-4c90-93f5-15d7ea4c488d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a174b591-0b28-4a71-b2b5-bcc06c4fb763","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig"}}
	{"specversion":"1.0","id":"0101d4fb-b0c4-42da-8035-ce3e3c3eaeb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube"}}
	{"specversion":"1.0","id":"8c9d6ce6-6cff-4706-bee2-39ba4a987650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"15e354b7-d8ff-4558-a1da-5e81bf7764de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8be58ff5-4bca-4c20-8a05-79efc8109d5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-316930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-316930
--- PASS: TestErrorJSONOutput (0.19s)
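Note on the JSON output above: every line minikube emits under --output=json is a CloudEvents-style object whose type field distinguishes steps, info messages and errors, and whose data map carries the human-readable message plus, for errors, the name and exitcode (here DRV_UNSUPPORTED_OS with exit code 56). A hedged sketch of decoding such a stream line by line, with field names taken from the output above rather than from any minikube schema package:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the JSON lines above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// e.g. pipe `minikube start --output=json ...` into this program
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // not a JSON event line
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exitcode %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			}
		}
	}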

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.46s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-524082 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-524082 --driver=kvm2  --container-runtime=crio: (41.329012092s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-536686 --driver=kvm2  --container-runtime=crio
E0915 07:27:56.196843   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-536686 --driver=kvm2  --container-runtime=crio: (43.440196813s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-524082
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-536686
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-536686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-536686
helpers_test.go:175: Cleaning up "first-524082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-524082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-524082: (1.043479888s)
--- PASS: TestMinikubeProfile (87.46s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-938945 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-938945 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.817634099s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.82s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-938945 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-938945 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-949688 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-949688 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.526776962s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.53s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949688 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949688 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-938945 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949688 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949688 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-949688
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-949688: (1.263938704s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-949688
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-949688: (20.292186076s)
--- PASS: TestMountStart/serial/RestartStopped (21.29s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949688 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949688 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-127008 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0915 07:31:02.684205   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-127008 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.793867424s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.19s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-127008 -- rollout status deployment/busybox: (4.800059377s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-52cww -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-zzxt7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-52cww -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-zzxt7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-52cww -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-zzxt7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-52cww -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-52cww -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-zzxt7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-127008 -- exec busybox-7dff88458-zzxt7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (50.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-127008 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-127008 -v 3 --alsologtostderr: (50.372779304s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.94s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-127008 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp testdata/cp-test.txt multinode-127008:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4167936864/001/cp-test_multinode-127008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008:/home/docker/cp-test.txt multinode-127008-m02:/home/docker/cp-test_multinode-127008_multinode-127008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m02 "sudo cat /home/docker/cp-test_multinode-127008_multinode-127008-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008:/home/docker/cp-test.txt multinode-127008-m03:/home/docker/cp-test_multinode-127008_multinode-127008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m03 "sudo cat /home/docker/cp-test_multinode-127008_multinode-127008-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp testdata/cp-test.txt multinode-127008-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4167936864/001/cp-test_multinode-127008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt multinode-127008:/home/docker/cp-test_multinode-127008-m02_multinode-127008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008 "sudo cat /home/docker/cp-test_multinode-127008-m02_multinode-127008.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008-m02:/home/docker/cp-test.txt multinode-127008-m03:/home/docker/cp-test_multinode-127008-m02_multinode-127008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m03 "sudo cat /home/docker/cp-test_multinode-127008-m02_multinode-127008-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp testdata/cp-test.txt multinode-127008-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4167936864/001/cp-test_multinode-127008-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt multinode-127008:/home/docker/cp-test_multinode-127008-m03_multinode-127008.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008 "sudo cat /home/docker/cp-test_multinode-127008-m03_multinode-127008.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 cp multinode-127008-m03:/home/docker/cp-test.txt multinode-127008-m02:/home/docker/cp-test_multinode-127008-m03_multinode-127008-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 ssh -n multinode-127008-m02 "sudo cat /home/docker/cp-test_multinode-127008-m03_multinode-127008-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.07s)
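Note on the CopyFile block above: the long command list is a simple matrix. For each node, the test copies testdata/cp-test.txt in, copies it back out to a temp directory, then fans it out to every other node, verifying each step with `ssh -n <node> sudo cat`. A condensed sketch of that loop using the node names from this run (the run helper and the error handling are illustrative, not the code in helpers_test.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary used throughout this report.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
		return nil
	}

	func main() {
		profile := "multinode-127008"
		nodes := []string{"multinode-127008", "multinode-127008-m02", "multinode-127008-m03"}
		for _, src := range nodes {
			// copy the fixture onto src and read it back (errors ignored for brevity)
			_ = run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
			_ = run("-p", profile, "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
			// fan the file out to every other node and verify it arrived
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				name := fmt.Sprintf("cp-test_%s_%s.txt", src, dst)
				_ = run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":/home/docker/"+name)
				_ = run("-p", profile, "ssh", "-n", dst, "sudo cat /home/docker/"+name)
			}
		}
	}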

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-127008 node stop m03: (1.510607146s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-127008 status: exit status 7 (422.426317ms)

                                                
                                                
-- stdout --
	multinode-127008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-127008-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-127008-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-127008 status --alsologtostderr: exit status 7 (419.900514ms)

                                                
                                                
-- stdout --
	multinode-127008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-127008-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-127008-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:32:19.187882   44231 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:32:19.188105   44231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:32:19.188113   44231 out.go:358] Setting ErrFile to fd 2...
	I0915 07:32:19.188117   44231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:32:19.188296   44231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-6166/.minikube/bin
	I0915 07:32:19.188458   44231 out.go:352] Setting JSON to false
	I0915 07:32:19.188487   44231 mustload.go:65] Loading cluster: multinode-127008
	I0915 07:32:19.188572   44231 notify.go:220] Checking for updates...
	I0915 07:32:19.188915   44231 config.go:182] Loaded profile config "multinode-127008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:32:19.188933   44231 status.go:255] checking status of multinode-127008 ...
	I0915 07:32:19.189396   44231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:32:19.189456   44231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:32:19.204627   44231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
	I0915 07:32:19.205081   44231 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:32:19.205654   44231 main.go:141] libmachine: Using API Version  1
	I0915 07:32:19.205669   44231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:32:19.206052   44231 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:32:19.206262   44231 main.go:141] libmachine: (multinode-127008) Calling .GetState
	I0915 07:32:19.207876   44231 status.go:330] multinode-127008 host status = "Running" (err=<nil>)
	I0915 07:32:19.207890   44231 host.go:66] Checking if "multinode-127008" exists ...
	I0915 07:32:19.208183   44231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:32:19.208218   44231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:32:19.223275   44231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I0915 07:32:19.223773   44231 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:32:19.224303   44231 main.go:141] libmachine: Using API Version  1
	I0915 07:32:19.224323   44231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:32:19.224631   44231 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:32:19.224889   44231 main.go:141] libmachine: (multinode-127008) Calling .GetIP
	I0915 07:32:19.227975   44231 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:32:19.228369   44231 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:32:19.228400   44231 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:32:19.228505   44231 host.go:66] Checking if "multinode-127008" exists ...
	I0915 07:32:19.228785   44231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:32:19.228826   44231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:32:19.243689   44231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I0915 07:32:19.244068   44231 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:32:19.244523   44231 main.go:141] libmachine: Using API Version  1
	I0915 07:32:19.244540   44231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:32:19.244823   44231 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:32:19.244998   44231 main.go:141] libmachine: (multinode-127008) Calling .DriverName
	I0915 07:32:19.245175   44231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:32:19.245193   44231 main.go:141] libmachine: (multinode-127008) Calling .GetSSHHostname
	I0915 07:32:19.247647   44231 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:32:19.248087   44231 main.go:141] libmachine: (multinode-127008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0d:95", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:29:35 +0000 UTC Type:0 Mac:52:54:00:b5:0d:95 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:multinode-127008 Clientid:01:52:54:00:b5:0d:95}
	I0915 07:32:19.248119   44231 main.go:141] libmachine: (multinode-127008) DBG | domain multinode-127008 has defined IP address 192.168.39.241 and MAC address 52:54:00:b5:0d:95 in network mk-multinode-127008
	I0915 07:32:19.248314   44231 main.go:141] libmachine: (multinode-127008) Calling .GetSSHPort
	I0915 07:32:19.248494   44231 main.go:141] libmachine: (multinode-127008) Calling .GetSSHKeyPath
	I0915 07:32:19.248633   44231 main.go:141] libmachine: (multinode-127008) Calling .GetSSHUsername
	I0915 07:32:19.248766   44231 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008/id_rsa Username:docker}
	I0915 07:32:19.329966   44231 ssh_runner.go:195] Run: systemctl --version
	I0915 07:32:19.336590   44231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:32:19.351759   44231 kubeconfig.go:125] found "multinode-127008" server: "https://192.168.39.241:8443"
	I0915 07:32:19.351796   44231 api_server.go:166] Checking apiserver status ...
	I0915 07:32:19.351844   44231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:32:19.365750   44231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1059/cgroup
	W0915 07:32:19.374955   44231 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1059/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:32:19.375021   44231 ssh_runner.go:195] Run: ls
	I0915 07:32:19.384535   44231 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0915 07:32:19.389179   44231 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I0915 07:32:19.389199   44231 status.go:422] multinode-127008 apiserver status = Running (err=<nil>)
	I0915 07:32:19.389208   44231 status.go:257] multinode-127008 status: &{Name:multinode-127008 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:32:19.389223   44231 status.go:255] checking status of multinode-127008-m02 ...
	I0915 07:32:19.389518   44231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:32:19.389549   44231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:32:19.404510   44231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0915 07:32:19.404925   44231 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:32:19.405386   44231 main.go:141] libmachine: Using API Version  1
	I0915 07:32:19.405410   44231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:32:19.405710   44231 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:32:19.405980   44231 main.go:141] libmachine: (multinode-127008-m02) Calling .GetState
	I0915 07:32:19.407728   44231 status.go:330] multinode-127008-m02 host status = "Running" (err=<nil>)
	I0915 07:32:19.407746   44231 host.go:66] Checking if "multinode-127008-m02" exists ...
	I0915 07:32:19.408011   44231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:32:19.408049   44231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:32:19.422500   44231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I0915 07:32:19.422812   44231 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:32:19.423218   44231 main.go:141] libmachine: Using API Version  1
	I0915 07:32:19.423239   44231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:32:19.423546   44231 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:32:19.423734   44231 main.go:141] libmachine: (multinode-127008-m02) Calling .GetIP
	I0915 07:32:19.426382   44231 main.go:141] libmachine: (multinode-127008-m02) DBG | domain multinode-127008-m02 has defined MAC address 52:54:00:bb:b3:19 in network mk-multinode-127008
	I0915 07:32:19.426853   44231 main.go:141] libmachine: (multinode-127008-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:b3:19", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:30:35 +0000 UTC Type:0 Mac:52:54:00:bb:b3:19 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:multinode-127008-m02 Clientid:01:52:54:00:bb:b3:19}
	I0915 07:32:19.426871   44231 main.go:141] libmachine: (multinode-127008-m02) DBG | domain multinode-127008-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:bb:b3:19 in network mk-multinode-127008
	I0915 07:32:19.427047   44231 host.go:66] Checking if "multinode-127008-m02" exists ...
	I0915 07:32:19.427454   44231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:32:19.427510   44231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:32:19.441765   44231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0915 07:32:19.442244   44231 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:32:19.442668   44231 main.go:141] libmachine: Using API Version  1
	I0915 07:32:19.442681   44231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:32:19.442957   44231 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:32:19.443145   44231 main.go:141] libmachine: (multinode-127008-m02) Calling .DriverName
	I0915 07:32:19.443274   44231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:32:19.443289   44231 main.go:141] libmachine: (multinode-127008-m02) Calling .GetSSHHostname
	I0915 07:32:19.446068   44231 main.go:141] libmachine: (multinode-127008-m02) DBG | domain multinode-127008-m02 has defined MAC address 52:54:00:bb:b3:19 in network mk-multinode-127008
	I0915 07:32:19.446572   44231 main.go:141] libmachine: (multinode-127008-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:b3:19", ip: ""} in network mk-multinode-127008: {Iface:virbr1 ExpiryTime:2024-09-15 08:30:35 +0000 UTC Type:0 Mac:52:54:00:bb:b3:19 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:multinode-127008-m02 Clientid:01:52:54:00:bb:b3:19}
	I0915 07:32:19.446589   44231 main.go:141] libmachine: (multinode-127008-m02) DBG | domain multinode-127008-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:bb:b3:19 in network mk-multinode-127008
	I0915 07:32:19.446723   44231 main.go:141] libmachine: (multinode-127008-m02) Calling .GetSSHPort
	I0915 07:32:19.446855   44231 main.go:141] libmachine: (multinode-127008-m02) Calling .GetSSHKeyPath
	I0915 07:32:19.447084   44231 main.go:141] libmachine: (multinode-127008-m02) Calling .GetSSHUsername
	I0915 07:32:19.447266   44231 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19644-6166/.minikube/machines/multinode-127008-m02/id_rsa Username:docker}
	I0915 07:32:19.529136   44231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:32:19.543924   44231 status.go:257] multinode-127008-m02 status: &{Name:multinode-127008-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:32:19.543965   44231 status.go:255] checking status of multinode-127008-m03 ...
	I0915 07:32:19.544295   44231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0915 07:32:19.544334   44231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0915 07:32:19.559369   44231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
	I0915 07:32:19.559953   44231 main.go:141] libmachine: () Calling .GetVersion
	I0915 07:32:19.560471   44231 main.go:141] libmachine: Using API Version  1
	I0915 07:32:19.560492   44231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0915 07:32:19.560812   44231 main.go:141] libmachine: () Calling .GetMachineName
	I0915 07:32:19.561030   44231 main.go:141] libmachine: (multinode-127008-m03) Calling .GetState
	I0915 07:32:19.562746   44231 status.go:330] multinode-127008-m03 host status = "Stopped" (err=<nil>)
	I0915 07:32:19.562762   44231 status.go:343] host is not running, skipping remaining checks
	I0915 07:32:19.562769   44231 status.go:257] multinode-127008-m03 status: &{Name:multinode-127008-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
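Note on the status output above: `minikube status` exits non-zero (status 7 in this run) as soon as any node in the profile is stopped, yet it still prints the per-node table on stdout, so callers have to read the output even on error. A rough sketch of handling that, with the exit-code meaning inferred from this log rather than asserted from documentation:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-127008", "status")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In the run above, exit status 7 accompanied a stopped node;
			// the per-node table is still on stdout, so keep it.
			fmt.Printf("status exited %d:\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("all nodes running:\n%s", out)
	}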

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 node start m03 -v=7 --alsologtostderr
E0915 07:32:56.196430   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-127008 node start m03 -v=7 --alsologtostderr: (38.854213319s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.46s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-127008 node delete m03: (1.499201925s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (180.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-127008 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0915 07:41:02.684485   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:56.196278   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/addons-368929/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-127008 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.254827843s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-127008 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-127008
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-127008-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-127008-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.871628ms)

                                                
                                                
-- stdout --
	* [multinode-127008-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-127008-m02' is duplicated with machine name 'multinode-127008-m02' in profile 'multinode-127008'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-127008-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-127008-m03 --driver=kvm2  --container-runtime=crio: (46.487108094s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-127008
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-127008: exit status 80 (213.825044ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-127008 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-127008-m03 already exists in multinode-127008-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-127008-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.56s)

                                                
                                    
TestScheduledStopUnix (110.43s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-660162 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-660162 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.838884877s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-660162 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-660162 -n scheduled-stop-660162
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-660162 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-660162 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-660162 -n scheduled-stop-660162
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-660162
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-660162 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0915 07:51:02.684479   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-660162
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-660162: exit status 7 (64.179726ms)

-- stdout --
	scheduled-stop-660162
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-660162 -n scheduled-stop-660162
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-660162 -n scheduled-stop-660162: exit status 7 (64.090319ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-660162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-660162
--- PASS: TestScheduledStopUnix (110.43s)
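
Note: the scheduled-stop sequence exercised above can be reproduced by hand roughly as follows (profile name illustrative); `minikube status` exiting with status 7 afterwards is the expected "host Stopped" result seen in the log:

    $ minikube stop -p demo --schedule 5m        # arm a stop five minutes out
    $ minikube stop -p demo --cancel-scheduled   # disarm the pending stop
    $ minikube stop -p demo --schedule 15s       # re-arm with a short delay
    $ sleep 20; minikube status -p demo          # exits 7 once the host reports Stopped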

TestRunningBinaryUpgrade (155.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3804428831 start -p running-upgrade-972764 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3804428831 start -p running-upgrade-972764 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m6.729303572s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-972764 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0915 07:56:02.684450   13190 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-6166/.minikube/profiles/functional-884523/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-972764 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.647695603s)
helpers_test.go:175: Cleaning up "running-upgrade-972764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-972764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-972764: (1.478361563s)
--- PASS: TestRunningBinaryUpgrade (155.46s)
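
Note: the running-upgrade flow amounts to starting a cluster with an old release binary and then re-running `start` on the same, still-running profile with the binary under test; a sketch with placeholder names (the temp-file suffix is random per run):

    $ /tmp/minikube-v1.26.0.<suffix> start -p demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 start -p demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 delete -p demo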

TestPause/serial/Start (168.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-742219 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-742219 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m48.846325124s)
--- PASS: TestPause/serial/Start (168.85s)

TestStoppedBinaryUpgrade/Setup (2.28s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.28s)

TestStoppedBinaryUpgrade/Upgrade (121.58s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4283691991 start -p stopped-upgrade-112030 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4283691991 start -p stopped-upgrade-112030 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m14.517320393s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4283691991 -p stopped-upgrade-112030 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4283691991 -p stopped-upgrade-112030 stop: (1.470032104s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-112030 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-112030 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.592386832s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.58s)
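
Note: the stopped-upgrade variant is the same hand-off with an explicit stop issued by the old binary in between (placeholder names again):

    $ /tmp/minikube-v1.26.0.<suffix> start -p demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ /tmp/minikube-v1.26.0.<suffix> -p demo stop
    $ out/minikube-linux-amd64 start -p demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio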

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-112030
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-315583 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-315583 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (68.844404ms)

-- stdout --
	* [NoKubernetes-315583] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-6166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-6166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
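
Note: the MK_USAGE exit above is the intended behaviour: `--no-kubernetes` cannot be combined with an explicit `--kubernetes-version`. If a version is pinned in the global config, the unset suggested in the stderr block clears it (profile name illustrative):

    $ minikube config unset kubernetes-version
    $ minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio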

TestNoKubernetes/serial/StartWithK8s (48.76s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-315583 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-315583 --driver=kvm2  --container-runtime=crio: (48.488856407s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-315583 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.76s)

TestNoKubernetes/serial/StartWithStopK8s (17.96s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-315583 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-315583 --no-kubernetes --driver=kvm2  --container-runtime=crio: (16.736228995s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-315583 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-315583 status -o json: exit status 2 (214.357847ms)

-- stdout --
	{"Name":"NoKubernetes-315583","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-315583
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-315583: (1.011126072s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.96s)
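
Note: the JSON status printed above is convenient for scripting the same "host up, kubelet down" check; assuming jq is available locally (an assumption about the toolbox, not part of the test itself):

    $ minikube -p demo status -o json | jq -r '.Host + "/" + .Kubelet'
    Running/Stopped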

TestNoKubernetes/serial/Start (26.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-315583 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-315583 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.876849022s)
--- PASS: TestNoKubernetes/serial/Start (26.88s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-315583 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-315583 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.399871ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
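
Note: exit status 1 is the pass condition here: `systemctl is-active` exits non-zero (the "Process exited with status 3" in the captured stderr) when the kubelet unit is inactive, which is exactly what a --no-kubernetes profile should report. The same probe by hand (profile name illustrative):

    $ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
    $ echo $?   # non-zero while kubelet is not running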

TestNoKubernetes/serial/ProfileList (1.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-315583
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-315583: (1.288257864s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (43.45s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-315583 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-315583 --driver=kvm2  --container-runtime=crio: (43.451253883s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.45s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-315583 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-315583 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.442365ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)


Test skip (34/222)

--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s): aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s): aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s): aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s): aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s): aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s): aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnlyKic (0.00s): aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestAddons/serial/Volcano (0.00s): addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/parallel/Olm (0.00s): addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestDockerFlags (0.00s): docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerEnvContainerd (0.00s): docker_test.go:170: running with crio false linux amd64; docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s): driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s): driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s): functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s): functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s): functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestGvisorAddon (0.00s): gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestImageBuild (0.00s): image_test.go:33:
--- SKIP: TestKicCustomNetwork (0.00s): kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s): kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicCustomSubnet (0.00s): kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s): kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestChangeNoneUser (0.00s): none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestScheduledStopWindows (0.00s): scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestSkaffold (0.00s): skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestInsufficientStorage (0.00s): status_test.go:38: only runs with docker driver
--- SKIP: TestMissingContainerUpgrade (0.00s): version_upgrade_test.go:284: This test is only for Docker